author    Patrick Rudolph <patrick.rudolph@9elements.com>  2021-06-11 21:24:10 +0200
committer Patrick Georgi <pgeorgi@google.com>  2021-06-15 07:47:35 +0000
commit    d023909b0112889941c1470cd125b1903572ba3b (patch)
tree      4c92322f65ca5e672e3af98f9804d4f9d65d6c91 /src/arch/x86/gdt_init.S
parent    6ccb252918c8a076dcf0a122e3aac028b4e80d4e (diff)
treewide: Disable R_AMD64_32S relocation support
This fixes a hard-to-debug hang that could occur in any stage; in the
end it follows simple rules and is easy to fix.
In long mode the 32-bit displacement used in 'mov' and 'lea' addressing
is sign-extended. Such instructions can be found by running readelf on
the stage and searching for the relocation type R_X86_64_32S.
The sign extension is no issue when running in protected mode, or when
the code module, and thus the address, is below 2GiB. If the address is
above 2GiB, as is usually the case for code in TSEG, the upper address
bits [63:32] are all set to 1 and the effective address points to
memory that is not paged. Accessing this memory causes a page fault,
which isn't handled either.
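The sign-extension effect described above can be reproduced with plain
64-bit shell arithmetic (a sketch; 0x80001000 stands in for a TSEG-like
address, it is not a value from the patch):

```shell
# Shifting a 32-bit value up and back down through a signed 64-bit
# integer mimics the CPU sign-extending a 32-bit displacement.
# Below 2 GiB the address is unchanged; above 2 GiB, bits [63:32]
# become all ones and the effective address is bogus.
printf 'below 2GiB: 0x%016x\n' $(( (0x7ff01000 << 32) >> 32 ))
printf 'above 2GiB: 0x%016x\n' $(( (0x80001000 << 32) >> 32 ))
```

The second address lands in the non-canonical-looking upper range, which no page tables cover in this environment, hence the unhandled page fault.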
To prevent such problems:
- disable R_AMD64_32S relocations in rmodtool
- add a comment explaining why they are not allowed
- use the movabs pseudo-op, which doesn't use 32-bit displacement addressing
- print a useful error message if such a relocation is present in the code
This fixes a crash seen on Intel Sandybridge when running code in TSEG
in long mode.
Change-Id: Ia5f5a9cde7c325f67b12e3a8e9a76283cc3870a3
Signed-off-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/55448
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Arthur Heymans <arthur@aheymans.xyz>
Diffstat (limited to 'src/arch/x86/gdt_init.S')
-rw-r--r-- | src/arch/x86/gdt_init.S | 12 |
1 file changed, 0 insertions(+), 12 deletions(-)
diff --git a/src/arch/x86/gdt_init.S b/src/arch/x86/gdt_init.S
index f33a1517d82d..30b39653c943 100644
--- a/src/arch/x86/gdt_init.S
+++ b/src/arch/x86/gdt_init.S
@@ -23,18 +23,6 @@ gdtptr:
 .section .init._gdt64_, "ax", @progbits
 .globl gdt_init64
 gdt_init64:
-	/* Workaround a bug in the assembler.
-	 * The following code doesn't work:
-	 * lgdt gdtptr64
-	 *
-	 * The assembler tries to save memory by using 32bit displacement addressing mode.
-	 * Displacements are using signed integers.
-	 * This is fine in protected mode, as the negative address points to the correct
-	 * address > 2GiB, but in long mode this doesn't work at all.
-	 * Tests showed that QEMU can gracefully handle it, but real CPUs can't.
-	 *
-	 * Use the movabs pseudo instruction to force using a 64bit absolute address.
-	 */
 	movabs	$gdtptr64, %rax
 	lgdt	(%rax)
 	ret