author | Josh Poimboeuf <jpoimboe@redhat.com> | 2016-09-21 16:04:02 -0500 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2016-10-20 09:15:23 +0200 |
commit | 6616a147a79c6fc280572f5a993e9e5ebd200d24 (patch) | |
tree | 01690120c4702201234db234155bcd07d2cd7fe5 /arch | |
parent | 4d516f41704055710da872b62ddc4b6d23248984 (diff) | |
x86/boot/32: Fix the end of the stack for idle tasks
The frame at the end of each idle task stack is inconsistent with real
task stacks, which have a stack frame header and a real return address
before the pt_regs area. This inconsistency can be confusing for stack
unwinders. It also hides useful information about what asm code was
involved in calling into C.
Fix that by changing the initial code jumps to calls. Also add infinite
loops after the calls to make it clear that the calls don't return, and
to hang if they do.
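In sketch form, the change at both entry points is the pattern below (lifted from the hunks in the diff further down; the comments are annotation, not part of the patch):

```asm
# Before: tail-jump into C.  The stack ends without a real return
# address (startup_32_smp pushed a fake $0 instead), which an unwinder
# cannot map back to the asm entry path.
	jmp	*(initial_code)

# After: the indirect call pushes a real return address, so the end of
# the idle task stack looks like a normal call frame.  The numeric-label
# loop makes it explicit that the C entry point never returns, and
# hangs here if it ever does.
	call	*(initial_code)
1:	jmp	1b
```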
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nilay Vaish <nilayvaish@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/2588f34b6fbac4ae6f6f9ead2a78d7f8d58a6341.1474480779.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/kernel/head_32.S | 8 |
1 file changed, 5 insertions, 3 deletions
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 65e62256df43..9a6f8e820ae1 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -289,7 +289,8 @@ num_subarch_entries = (. - subarch_entries) / 4
 ENTRY(start_cpu0)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
-	jmp *(initial_code)
+	call *(initial_code)
+1:	jmp 1b
 ENDPROC(start_cpu0)
 #endif
 
@@ -470,8 +471,9 @@ ENTRY(startup_32_smp)
 	xorl %eax,%eax			# Clear LDT
 	lldt %ax
 
-	pushl $0		# fake return address for unwinder
-	jmp *(initial_code)
+	call *(initial_code)
+1:	jmp 1b
+
 ENDPROC(startup_32_smp)
 
 #include "verify_cpu.S"
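For context (not part of this patch, and recalled from this era of the tree, so treat the exact form as an assumption): `initial_code` is a data word defined elsewhere in head_32.S holding the address of the C entry point, roughly as sketched below. That entry point (i386_start_kernel at boot, start_secondary when bringing up other CPUs) does not return, which is why the loop after the `call` is only a safety net.

```asm
# Rough sketch of the initial_code definition elsewhere in
# arch/x86/kernel/head_32.S (assumed layout, not shown in this diff):
	__REFDATA
	.align 4
ENTRY(initial_code)
	.long i386_start_kernel
```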