path: root/UefiCpuPkg
Commit message    Author    Age    Files    Lines
* UefiCpuPkg/SecMain: Add NORETURN decorator to SecStartup().Marvin Häuser2018-05-082-2/+9
| | | | | | | | | | The function SecStartup() is not supposed to return. Hence, add the NORETURN decorator. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com> Reviewed-by: Eric Dong <eric.dong@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
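For context, here is a minimal sketch of how the NORETURN decorator from MdePkg's Base.h is typically applied; the function name and parameter below are hypothetical, and the trailing CpuDeadLoop()/UNREACHABLE() pair only illustrates the usual pattern for a function that never returns.

```c
#include <Base.h>
#include <Library/BaseLib.h>

//
// Hypothetical entry point annotated the same way as SecStartup():
// NORETURN tells the compiler control never comes back, and the trailing
// CpuDeadLoop()/UNREACHABLE() pair documents and enforces that.
//
NORETURN
VOID
EFIAPI
PlatformSecEntryPoint (
  IN VOID  *BootFirmwareVolume
  )
{
  //
  // ... hand off to the next phase; execution never returns here ...
  //
  CpuDeadLoop ();
  UNREACHABLE ();
}
```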
* UefiCpuPkg MpInitLib: Fix typo "sCPUID" to "CPUID"Star Zeng2018-04-251-2/+2
| | | | | | | | | Cc: Eric Dong <eric.dong@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Star Zeng <star.zeng@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: use mnemonics for FXSAVE(64)/FXRSTOR(64)Laszlo Ersek2018-04-043-10/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM 0.98p3-hpa", 2002-04-30), which commit stands for the nasm-0.98p3-hpa release. NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15 ("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was part of the "nasm-2.09" release. Edk2 requires nasm-2.10 or later for use with the GCC toolchain family, and nasm-2.12.01 or later for use with all other toolchain families. Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions with mnemonics. I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and "X64/SmiException.obj" files are rebuilt after this patch, without any change in content. This patch removes the last instructions encoded with DBs from PiSmmCpuDxeSmm. Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: remove DBs from SmmRelocationSemaphoreComplete32()Laszlo Ersek2018-04-042-17/+23
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | (1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in a (BITS 32 ... BITS 64) bracket. (2) SmmRelocationSemaphoreComplete32() currently compiles to: > 000002AE C6050000000001 mov byte [dword 0x0],0x1 > 000002B5 FF2500000000 jmp dword [dword 0x0] where the first instruction is patched with the contents of "mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second instruction is patched with the address of "mSmmRelocationOriginalAddress" (so that we jump to "mSmmRelocationOriginalAddress"). In its current form the first instruction could not be patched with PatchInstructionX86(), given that the operand to patch is not encoded in the trailing bytes of the instruction. Therefore, adopt an EAX-based version, inspired by both the IA32 and X64 variants of SmmRelocationSemaphoreComplete(): > 000002AE 50 push eax > 000002AF B800000000 mov eax,0x0 > 000002B4 C60001 mov byte [eax],0x1 > 000002B7 58 pop eax > 000002B8 FF2500000000 jmp dword [dword 0x0] Here both instructions can be patched with PatchInstructionX86(), and the DBs can be replaced with native NASM syntax. (3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32" variables into markers that suit PatchInstructionX86(). Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmInitStack" with PatchInstructionX86()Laszlo Ersek2018-04-044-8/+12
| | | | | | | | | | | | | | | | | Rename the variable to "gPatchSmmInitStack" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm". The size of the patched source operand is (sizeof (UINTN)). Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
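As a sketch of the mechanism this series relies on: an X86_ASSEMBLY_PATCH_LABEL is emitted right after an instruction in the NASM source, and C code overwrites the instruction's trailing operand through it. The helper shape below is reconstructed from these commit messages and should be read as an assumption, not the literal PiSmmCpuDxeSmm declaration; gPatchSmmInitStack is the label named in this commit.

```c
#include <Base.h>
#include <Library/BaseMemoryLib.h>

//
// Assumed shape of the patching helper: the label marks the END of the
// instruction, and the operand to patch occupies the trailing ValueSize
// bytes, so no hand-maintained DB encoding is needed.
//
VOID
PatchInstructionX86Sketch (
  OUT UINT8   *InstructionEnd,   // patch label placed right after the instruction
  IN  UINT64  PatchValue,        // new value for the instruction's operand
  IN  UINTN   ValueSize          // operand size in bytes: 1, 2, 4 or 8
  )
{
  //
  // Overwrite the trailing operand bytes (little-endian, as on x86).
  //
  CopyMem (InstructionEnd - ValueSize, &PatchValue, ValueSize);
}

//
// Usage as in this commit: gPatchSmmInitStack marks a "mov esp/rsp, imm"
// in SmmInit.nasm whose source operand is (sizeof (UINTN)) bytes wide.
//
extern UINT8  gPatchSmmInitStack[];

VOID
ExamplePatchSmmInitStack (
  IN UINTN  StackValue
  )
{
  PatchInstructionX86Sketch (gPatchSmmInitStack, StackValue, sizeof (UINTN));
}
```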
* UefiCpuPkg/PiSmmCpuDxeSmm: eliminate "gSmmJmpAddr" and related DBsLaszlo Ersek2018-04-044-28/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can simply use the NASM syntax for the following Mixed-Size Jump: > jmp PROTECT_MODE_CS : dword @32bit The generated object code for the instruction is unchanged: > 00000182 66EA5A0000000800 jmp dword 0x8:0x5a (The NASM manual explains that putting the DWORD prefix after the colon ":" reflects the intent better, since it is the offset that is a DWORD. Thus, that's what I used. However, both syntaxes are interchangeable, hence the ndisasm output.) The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr"; however that's accidental, not inherent: - Bring LONG_MODE_CODE_SEGMENT from "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32 version as PROTECT_MODE_CS earlier. - Apply the NASM-native Mixed-Size Jump syntax again, but jump to the fixed zero offset in LONG_MODE_CS. This will produce no relocation record at all. Add a label after the instruction. - Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards from the label. Because we modify the DWORD offset with a DWORD access, the segment selector is unharmed in the instruction, and we need not set it from PiCpuSmmEntry(). According to "objdump --reloc", the X64 version undergoes only the following relocations, after this patch: > RELOCATION RECORDS FOR [.text]: > OFFSET TYPE VALUE > 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004 > 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004 > 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004 Therefore the patch does not regress <https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool chain for UefiCpuPkg with nasm source code"). Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr0" with PatchInstructionX86()Laszlo Ersek2018-04-045-9/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for machine code patching, but also as a means to communicate the initial CR0 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr0Value" instruction's binary representation are utilized as normal data too. In order to get rid of the DB for "mov eax, Cr0Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr0" variable to "gPatchSmmCr0" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr0". This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in "SmmInit.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr4" with PatchInstructionX86()Laszlo Ersek2018-04-045-9/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for machine code patching, but also as a means to communicate the initial CR4 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr4Value" instruction's binary representation are utilized as normal data too. In order to get rid of the DB for "mov eax, Cr4Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr4" variable to "gPatchSmmCr4" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr4". This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in "SmmInit.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
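A compact sketch of the split this commit describes, using the names from the message: the CR4 value is kept in a plain C variable for the data flow, and the instruction operand is patched separately. AsmReadCr4() is the BaseLib accessor; CopyMem stands in for PatchInstructionX86(), whose exact call form is assumed here.

```c
#include <Base.h>
#include <Library/BaseLib.h>
#include <Library/BaseMemoryLib.h>

extern UINT8  gPatchSmmCr4[];    // label after "mov eax, imm32" in SmmInit.nasm

UINT32  mSmmCr4;                 // plain SMRAM data, read later by InitSmmS3ResumeState()

VOID
ExampleCaptureAndPatchSmmCr4 (
  VOID
  )
{
  //
  // Data flow: keep the CR4 value in a normal C variable ...
  //
  mSmmCr4 = (UINT32)AsmReadCr4 ();
  //
  // ... code flow: patch the 4-byte immediate of "mov eax, imm32" in sync
  // (PatchInstructionX86() in the real driver; CopyMem stands in here).
  //
  CopyMem (gPatchSmmCr4 - sizeof (UINT32), &mSmmCr4, sizeof (UINT32));
}
```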
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr3" with PatchInstructionX86()Laszlo Ersek2018-04-044-8/+8
| | | | | | | | | | | | | | | Rename the variable to "gPatchSmmCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from X64 SmmStartup()Laszlo Ersek2018-04-041-9/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | (This patch is the 64-bit variant of commit e75ee97224e5, "UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()", 2018-01-31.) The SmmStartup() function executes in SMM, which is very similar to real mode. Add "BITS 16" before it and "BITS 64" after it (just before the @LongMode label). Remove the manual 0x66 operand-size override prefixes, for selecting 32-bit operands -- the sizes of our operands trigger NASM to insert the prefixes automatically in almost every spot. The one place where we have to add it back manually is the LGDT instruction. In the LGDT instruction we also replace the binary 0x2E prefix with the normal NASM syntax for CS segment override. The stores to the Control Registers were always 32-bit wide; the source code only used RAX as source operand because it generated the expected object code (with NASM compiling the source as if in BITS 64). With BITS 16 added, we can use the actual register width in the source operands (EAX). This patch causes NASM to generate byte-identical object code (determined by disassembling both the pre-patch and post-patch versions, and comparing the listings), except: > @@ -231,7 +231,7 @@ > 000001D2 6689D3 mov ebx,edx > 000001D5 66B800000000 mov eax,0x0 > 000001DB 0F22D8 mov cr3,eax > -000001DE 662E670F0155F6 o32 lgdt [cs:ebp-0xa] > +000001DE 2E66670F0155F6 o32 lgdt [cs:ebp-0xa] > 000001E5 66B800000000 mov eax,0x0 > 000001EB 80CC02 or ah,0x2 > 000001EE 0F22E0 mov cr4,eax The only difference is the prefix list order, it changes from: - 0x66, 0x2E, 0x67 to - 0x2E, 0x66, 0x67 Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "XdSupported" with PatchInstructionX86()Laszlo Ersek2018-04-044-6/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | "mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The CheckFeatureSupported() function is executed on all processors (not concurrently though), called from SmmInitHandler(). If XD support is found to be missing on any CPU, then "mXdSupported" is set to FALSE, and further processors omit the check. Afterwards, "mXdSupported" is read by several assembly and C code locations. The tricky part is *where* "mXdSupported" is allocated (defined): - Before commit 717fb60443fb ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection.", 2016-11-17), it used to be a normal global variable, defined (allocated) in "SmmProfile.c". - With said commit, we moved the definition (allocation) of "mXdSupported" into "SmiEntry.nasm". The variable was defined over the last byte of a "mov al, 1" instruction, so that setting it to FALSE in CheckFeatureSupported() would patch the instruction to "mov al, 0". The subsequent conditional jump would change behavior, plus all further read references to "mXdSupported" (in C and assembly code) would read back the source (imm8) operand of the patched MOV instruction as data. This trick required that the MOV instruction be encoded with DB. In order to get rid of the DB, we have to split both roles: we need a label for the code patching, and "mXdSupported" has to be defined (allocated) independently of the code patching. Of course, their values must always remain in sync. (1) Reinstate the "mXdSupported" definition and initialization in "SmmProfile.c" from before commit 717fb60443fb. Change the assembly language definition ("global") to a declaration ("extern"). (2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL) in "SmiEntry.nasm", and add the C-language declaration to "SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping the imm8 source operand with value 1). (3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE, patch the assembly code in sync, with PatchInstructionX86(). Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiCr3" with PatchInstructionX86()Laszlo Ersek2018-04-043-8/+8
| | | | | | | | | | | | | | | Rename the variable to "gPatchSmiCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmiEntry.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiStack" with PatchInstructionX86()Laszlo Ersek2018-04-043-9/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Rename the variable to "gPatchSmiStack" so that its association with PatchInstructionX86() is clear from the declaration. Also change its type to X86_ASSEMBLY_PATCH_LABEL. Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is also de-referenced by C code (in other words, it is read back after patching): the InstallSmiHandler() function stores "CpuIndex" to the given CPU's SMI stack through "gSmiStack". Introduce the local variable "CpuSmiStack" in InstallSmiHandler() for calculating the stack location separately, then use this variable for both patching into the assembly code, and for storing "CpuIndex" through it. It's assumed that "volatile" stood in the declaration of "gSmiStack" because we used to read "gSmiStack" back for de-referencing; with that use gone, we can remove "volatile" too. (Note that the *target* of the pointer was never volatile-qualified.) Finally, replace the binary (DB) encoding of "mov esp, imm32" in "SmiEntry.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmbase" with PatchInstructionX86()Laszlo Ersek2018-04-043-12/+12
| | | | | | | | | | | | | | | Rename the variable to "gPatchSmbase" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmiEntry.nasm". Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: remove *.S and *.asm assembly filesLaszlo Ersek2018-04-0417-4294/+0
| | | | | | | | | | | | | | | | | | | | | | All edk2 toolchains use NASM for compiling X86 assembly source code. We plan to remove X86 *.S and *.asm files globally, in order to reduce maintenance and confusion: http://mid.mail-archive.com/4A89E2EF3DFEDB4C8BFDE51014F606A14E1B9F76@SHSMSX104.ccr.corp.intel.com https://lists.01.org/pipermail/edk2-devel/2018-March/022690.html https://bugzilla.tianocore.org/show_bug.cgi?id=881 Let's start with UefiCpuPkg/PiSmmCpuDxeSmm: remove the *.S and *.asm dialects (both Ia32 and X64) of the SmmInit, SmiEntry, SmiException and MpFuncs sources. Cc: Eric Dong <eric.dong@intel.com> Cc: Michael D Kinney <michael.d.kinney@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Andrew Fish <afish@apple.com> Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg PiSmmCpuDxeSmm: Refine some comments about SmmMemoryAttributeStar Zeng2018-04-042-23/+17
| | | | | | | | | | | | | | | 1. Fix some "support" to "supported". 2. Fix some "set" to "clear" in ClearMemoryAttributes interface. 3. Remove redundant comments for GetMemoryAttributes interface. Cc: Jian J Wang <jian.j.wang@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Star Zeng <star.zeng@intel.com> Reviewed-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/MpInitLib: Disable interrupt at ExitBootServices AP MwaitHao Wu2018-03-202-2/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Within function ApWakeupFunction(): When source level debugger is enabled, AP interrupts will be enabled by EnableDebugAgent(). Then the AP function will be executed by: Procedure (Parameter); After the AP function returns, AP interrupts will be disabled when the APs are placed in loop mode (both HltLoop and MwaiLoop). However, at ExitBootServices, ApWakeupFunction() is called with 'Procedure' equals to RelocateApLoop(). (ExitBootServices callback registered within InitMpGlobalData()) RelocateApLoop() never returns, so it has to disable the AP interrupts by itself. However, we find that interrupts are only disabled for the HltLoop case, but not for the MwaitLoop case (within file MpFuncs.nasm). This commit adds the missing disabling of AP interrupts for MwaitLoop. Also, for X64, this commit will disable the interrupts before switching to 32-bit mode. Cc: Laszlo Ersek <lersek@redhat.com> Cc: Jian J Wang <jian.j.wang@intel.com> Cc: Star Zeng <star.zeng@intel.com> Cc: Eric Dong <eric.dong@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Hao Wu <hao.a.wu@intel.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com> Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
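The fix itself lives in MpFuncs.nasm; the C-level sketch below only illustrates the intended behavior: interrupts are disabled before the AP parks in MWAIT, mirroring what the HLT loop already did. The wake-up flag is a placeholder; AsmMonitor(), AsmMwait() and DisableInterrupts() are the BaseLib intrinsics.

```c
#include <Base.h>
#include <Library/BaseLib.h>

VOID
ExampleApMwaitLoop (
  IN volatile UINT32  *WakeupFlag    // placeholder for the per-AP signal
  )
{
  DisableInterrupts ();                      // the step that was missing for the MWAIT loop

  while (*WakeupFlag == 0) {
    AsmMonitor ((UINTN)WakeupFlag, 0, 0);    // arm the monitor on the signal line
    if (*WakeupFlag != 0) {
      break;                                 // re-check to avoid a lost wake-up
    }
    AsmMwait (0, 0);                         // park until the monitored line is written
  }
}
```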
* UefiCpuPkg CpuExceptionHandlerLib: use FixedPcdGetSize() as the macro valueLiming Gao2018-03-161-3/+3
| | | | | | | | | | | | FixedPcdGetSize() is used as the macro value, PcdGetSize() is used as global variable or function. Here usage is to access macro value. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Liming Gao <liming.gao@intel.com> Cc: Wang Jian J <jian.j.wang@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Michael Kinney <michael.d.kinney@intel.com> Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
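A small illustration of the distinction the commit relies on; PcdExampleBuffer is a hypothetical fixed-at-build PCD. FixedPcdGetSize() expands to a compile-time constant and can therefore appear where the compiler needs one, whereas PcdGetSize() may resolve to a function call or a global-variable access.

```c
#include <Base.h>
#include <Library/PcdLib.h>

//
// Usable as a macro value because it is a build-time constant
// (PcdExampleBuffer is hypothetical):
//
#define EXAMPLE_BUFFER_SIZE  FixedPcdGetSize (PcdExampleBuffer)

//
// A static array dimension must be a constant expression:
//
STATIC UINT8  mShadowBuffer[EXAMPLE_BUFFER_SIZE];
```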
* UefiCpuPkg/MpInitLib: put mReservedApLoopFunc in executable memoryJian J Wang2018-03-081-4/+34
| | | | | | | | | | | | | | | | | | if PcdDxeNxMemoryProtectionPolicy is enabled for EfiReservedMemoryType of memory, #PF will be triggered for each APs after ExitBootServices in SCRT test. The root cause is that AP wakeup code executed at that time is stored in memory of type EfiReservedMemoryType (referenced by global mReservedApLoopFunc), which is marked as non-executable. This patch fixes this issue by setting memory of mReservedApLoopFunc to be executable immediately after allocation. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
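A sketch of the "mark it executable right after allocation" step, assuming the usual DXE GCD services; error handling is trimmed and the wrapper name is illustrative.

```c
#include <PiDxe.h>
#include <Library/DxeServicesTableLib.h>

VOID
ExampleClearNoExecute (
  IN EFI_PHYSICAL_ADDRESS  Address,
  IN UINT64                Length
  )
{
  EFI_GCD_MEMORY_SPACE_DESCRIPTOR  MemDesc;

  if (!EFI_ERROR (gDS->GetMemorySpaceDescriptor (Address, &MemDesc)) &&
      ((MemDesc.Attributes & EFI_MEMORY_XP) != 0)) {
    //
    // Drop the no-execute attribute so the AP loop code placed here can run.
    //
    gDS->SetMemorySpaceAttributes (
           Address,
           Length,
           MemDesc.Attributes & ~(UINT64)EFI_MEMORY_XP
           );
  }
}
```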
* UefiCpuPkg/CpuCommonFeaturesLib: Fix coding style issueDandan Bi2018-03-081-1/+1
| | | | | | | | | | | | | Boolean values do not need to use explicit comparisons to TRUE or FALSE. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Dandan Bi <dandan.bi@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg S3ResumePei: Signal S3SmmInitDoneStar Zeng2018-03-032-14/+37
| | | | | | | | | | Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Star Zeng <star.zeng@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com> Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/CpuExceptionHandlerLib: fix incorrect init of exception stackJian J Wang2018-02-281-1/+1
| | | | | | | | | | | | | | | | | | This issue is introduced at following commit, which tried to add stack switch support on behalf of Stack Guard feature. 0ff5aa9cae1ea276668fa4398d047aa9fda3c2c7 The field KnownGoodStackTop in CPU_EXCEPTION_INIT_DATA is initialized to the start address of array mNewStack. This is wrong. It must be the end of mNewStack. This patch fixes this mistake. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
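A sketch of the corrected initialization; mNewStack and KnownGoodStackTop are the names used in the commit message, while the surrounding code and the buffer size are illustrative. Since the stack grows downward, the "top" handed to the exception handler must be the end of the buffer, not its start.

```c
#include <Base.h>

#define KNOWN_GOOD_STACK_SIZE  SIZE_8KB          // illustrative size

STATIC UINT8  mNewStack[KNOWN_GOOD_STACK_SIZE];

VOID
ExampleInitKnownGoodStack (
  OUT UINTN  *KnownGoodStackTop,
  OUT UINTN  *KnownGoodStackSize
  )
{
  *KnownGoodStackTop  = (UINTN)mNewStack + sizeof (mNewStack);   // end of the array, not its start
  *KnownGoodStackSize = sizeof (mNewStack);
}
```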
* UefiCpuPkg/S3Resume: Remove useless perf codeDandan Bi2018-02-122-133/+1
| | | | | | | | | | | | | | | | | | | V2: Just update the commit message to reference the hash value of new performance infrastructure. Our new performance infrastructure (edk2 trunk commit hash value: SHA-1: 73fef64f14d1b97ae9bd4705df3becc022391eba ~ SHA-1: 115eae650bfd2be2c2bc37360f4a755065e774c4)can support to dump performance date form ACPI table in OS. So we can remove the old perf code to write performance data to OS. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Liming Gao <liming.gao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Dandan Bi <dandan.bi@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Star Zeng <star.zeng@intel.com>
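(No example for this entry; the change only deletes legacy performance-reporting code.)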
* UefiCpuPkg/S3Resume: Add more perf entry for S3 phaseBi, Dandan2018-02-091-1/+14
| | | | | | | | | | | | | | | | | | | | | | | V2: Just update the commit message. Add more perf entry to hook BootScriptDonePpi/EndOfPeiPpi/ EndOfS3Resume. Add the new perf entry with Identifier PERF_INMODULE_START_ID/PERF_INMODULE_END_ID which are defined in new performance infrastructure (edk2 trunk commit hash value: SHA-1: 73fef64f14d1b97ae9bd4705df3becc022391eba ~ SHA-1: 115eae650bfd2be2c2bc37360f4a755065e774c4). PERF_INMODULE_START_ID/PERF_INMODULE_END_ID are general Identifier which are used within a module. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Liming Gao <liming.gao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Dandan Bi <dandan.bi@intel.com> Reviewed-by: Liming Gao <liming.gao@intel.com> Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/FeaturesLib: don't init MCi_CTL/STATUS when MCA's disabledRuiyu Ni2018-02-091-15/+17
| | | | | | | | | | | Today's McaInitialize() doesn't check State value before initialize MCi_CTL and MCi_STATUS. The patch fixes this issue by only initializing the two kinds of MSRs when State is enabled. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
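A sketch of the guarded initialization; the MSR numbers (IA32_MCG_CAP = 0x179, IA32_MCi_CTL = 0x400 + 4*i, IA32_MCi_STATUS = 0x401 + 4*i) follow the Intel SDM, while the function shape and the Enable parameter are illustrative stand-ins for the library's State handling.

```c
#include <Base.h>
#include <Library/BaseLib.h>

VOID
ExampleMcaInitialize (
  IN BOOLEAN  Enable
  )
{
  UINT32  BankCount;
  UINT32  Index;

  if (!Enable) {
    return;                                           // leave MCi_CTL/MCi_STATUS untouched
  }

  BankCount = (UINT32)(AsmReadMsr64 (0x179) & 0xFF);  // IA32_MCG_CAP[7:0]: number of banks
  for (Index = 0; Index < BankCount; Index++) {
    AsmWriteMsr64 (0x400 + Index * 4, MAX_UINT64);    // IA32_MCi_CTL: enable all error logging
    AsmWriteMsr64 (0x401 + Index * 4, 0);             // IA32_MCi_STATUS: clear
  }
}
```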
* UefiCpuPkg/FeaturesLib: Fix Haswell CPU hang with 50% throttlingRuiyu Ni2018-02-081-29/+23
| | | | | | | | | | | | | | | Today's implementation only assumes SandyBridge CPU supports Extended On-Demand Clock Modulation Duty Cycle. Actually it is supported when CPUID.06h.EAX[5] == 1. When platform requests 50% throttling, it causes value 1000b set to the low-4 bits of IA32_CLOCK_MODULATION. But the wrong code sets 1000b to bits[1-3] which causes assertion. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Jeff Fan <vanjeff_919@hotmail.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
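A sketch of the feature check implied by the fix: rather than keying off a CPU model, query CPUID leaf 06h (thermal and power management) and test EAX bit 5 for Extended On-Demand Clock Modulation Duty Cycle support. The function name is illustrative.

```c
#include <Base.h>
#include <Library/BaseLib.h>

BOOLEAN
ExampleHasExtendedClockModulation (
  VOID
  )
{
  UINT32  RegEax;

  AsmCpuid (0x06, &RegEax, NULL, NULL, NULL);   // CPUID leaf 06h
  return (BOOLEAN)((RegEax & BIT5) != 0);       // EAX[5]: extended duty-cycle granularity
}
```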
* UefiCpuPkg/PiSmmCpuDxeSmm: fix infinite loop issue in SMM profileJian J Wang2018-02-081-4/+17
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > v2: > Reduce the number of page to update/restore from 3 to 2 because DF > has no effect in this issue. The infinite loop is caused by the memory instruction, such as "rep mov", operating on memory block crossing boundary of NON-PRESENT pages. Because the address triggering page fault set in CR2 will be in the first page, SmmProfilePFHandler() will only change the first page into PRESENT. The page following will be still in NON-PRESENT status. Since SmmProfilePFHandler() will setup single-step trap for the instruction causing #PF, when the handler returns back to the instruction and re-execute it, both #DB and #PF will be triggered because the instruction wants to access both first and second page but only first page is PRESENT. Normally #DB exception will be handled first and its handler will change first page back to NON-PRESENT status. Then #PF is handled and its handler will change first page to PRESENT status again and setup another single-step for the instruction triggering #PF. Then the whole system falls into an infinite loop and the memory operation will never move on. This patch fix above situation by always changing 2 pages to PRESENT status instead of just 1 page. Those 2 pages include the page causing #PF and the page after it. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg: Remove the unused file ResetVec.asm16Liming Gao2018-02-011-106/+0
| | | | | | | | ResetVec.nasmb is used. ResetVec.asm16 can be retired. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Liming Gao <liming.gao@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: eliminate conditional jump in IA32 SmmStartup()Laszlo Ersek2018-01-311-4/+3
| | | | | | | | | | | | | | | | | | | | | | SMM emulation under both KVM and QEMU (TCG) crashes the guest when the "jz" branch, added in commit d4d87596c11d ("UefiCpuPkg/PiSmmCpuDxeSmm: Enable NXE if it's supported", 2018-01-18), is taken. Rework the propagation of CPUID.80000001H:EDX.NX [bit 20] to IA32_EFER.NXE [bit 11] so that no code is executed conditionally. Cc: Eric Dong <eric.dong@intel.com> Cc: Jian J Wang <jian.j.wang@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Ref: http://mid.mail-archive.com/d6fff558-6c4f-9ca6-74a7-e7cd9d007276@redhat.com Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> [lersek@redhat.com: XD -> NX code comment updates from Ray] Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com> [lersek@redhat.com: mark QEMU/TCG as well in the commit message]
* UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()Laszlo Ersek2018-01-311-7/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The SmmStartup() executes in SMM, which is very similar to real mode. Add "BITS 16" before it and "BITS 32" after it (just before the @32bit label). Remove the manual 0x66 operand-size override prefixes, for selecting 32-bit operands -- the sizes of our operands trigger NASM to insert the prefixes automatically in almost every spot. The one place where we have to add it back manually is the LGDT instruction. (The 0x67 address-size override prefix is also auto-generated.) This patch causes NASM to generate byte-identical object code (determined by disassembling both the pre-patch and post-patch versions, and comparing the listings), except: > @@ -158,7 +158,7 @@ > 00000142 6689D3 mov ebx,edx > 00000145 66B800000000 mov eax,0x0 > 0000014B 0F22D8 mov cr3,eax > -0000014E 67662E0F0155F6 o32 lgdt [cs:ebp-0xa] > +0000014E 2E66670F0155F6 o32 lgdt [cs:ebp-0xa] > 00000155 66B800000000 mov eax,0x0 > 0000015B 0F22E0 mov cr4,eax > 0000015E 66B9800000C0 mov ecx,0xc0000080 The only difference is the prefix list order, it changes from: - 0x67, 0x66, 0x2E to - 0x2E, 0x66, 0x67 (0x2E is "CS segment override"). Cc: Eric Dong <eric.dong@intel.com> Cc: Jian J Wang <jian.j.wang@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866 Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: update comments in IA32 SmmStartup()Laszlo Ersek2018-01-311-4/+4
| | | | | | | | | | | | | | | | | | | | The gSmmCr3, gSmmCr4, gSmmCr0 and gSmmJmpAddr global variables are used for patching assembly instructions, thus we can't yet remove the DB encodings for those instructions. At least we should add the intended meanings in comments. This patch only changes comments. Cc: Eric Dong <eric.dong@intel.com> Cc: Jian J Wang <jian.j.wang@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com> [lersek@redhat.com: adapt commit msg to ongoing PatchAssembly discussion]
* UefiCpuPkg/CpuDxe: remove all code to flush TLB for APsJian J Wang2018-01-291-80/+5
| | | | | | | | | | | | | | | | | | | The reason doing this is that we found that calling StartupAllAps() to flush TLB for all APs in CpuDxe driver after changing page attributes will spend a lot of time to complete. If there are many page attributes update requests, the whole system performance will be slowed down explicitly, including any shell command and UI operation. The solution is removing the flush operation for AP in CpuDxe driver and let AP flush TLB after woken up. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/MpInitLib: force flushing TLB for AP in mwait loop modeJian J Wang2018-01-291-0/+7
| | | | | | | | | | | | | | | | | | | | The reason doing this is that we found that calling StartupAllAps() to flush TLB for all APs in CpuDxe driver after changing page attributes will spend a lot of time to complete. If there are many page attributes update requests, the whole system performance will be slowed down explicitly, including any shell command and UI operation. The solution is removing the flush operation for AP in CpuDxe driver. Since TLB is always flushed in HLT loop mode, we just need to enforce a TLB flush for mwait loop mode. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
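Conceptually, the change amounts to the AP flushing its own TLB once it leaves the MWAIT loop (the real change is in the NASM AP loop); CpuFlushTlb() from BaseLib does this by reloading CR3, and the function below is only an illustration of that idea.

```c
#include <Base.h>
#include <Library/BaseLib.h>

VOID
ExampleApAfterMwaitWakeup (
  VOID
  )
{
  //
  // Reloading CR3 invalidates the AP's non-global TLB entries, so page
  // attribute changes made by the BSP while the AP slept take effect.
  //
  CpuFlushTlb ();

  // ... continue with the requested AP procedure ...
}
```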
* UefiCpuPkg/MpInitLib: fix AP init issue in 64-bit PEIJian J Wang2018-01-291-4/+5
| | | | | | | | | | | | | | | | | | | | This issue is introduced by a patch at f32bfe6d061420a15bac6083063d227c567e6388 The above patch miss the case of 64-bit PEI, which will link X64/MpFuncs.nasm instead of Ia32/MpFuncs.nasm. For X64/MpFuncs.nasm, ExchangeInfo->ModeHighMemory should be always initialized no matter if separate wakeup buffer is allocated or not. Ia32/MpFuncs.nasm will not need ModeHighMemory during AP init. So the changes made in this patch should not affect the functionality of it. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/MpInitLib: Make sure AP uses correct StartupApSignalStar Zeng2018-01-261-0/+9
| | | | | | | | | | | | | | | | Every processor's StartupApSignal is initialized in MpInitLibInitialize() before calling CollectProcessorCount(). When SortApicId() is called from CollectProcessorCount(), AP Index is re-assigned by APIC ID. But SortApicId() forgets to set the correct StartupApSignal when sorting the AP. The patch fixes this issue. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Star Zeng <star.zeng@intel.com> Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Star Zeng <star.zeng@intel.com> Cc: Chasel Chiu <chasel.chiu@intel.com>
* UefiCpuPkg/CpuExceptionHandler: Init serial port before context dumpRuiyu Ni2018-01-262-2/+10
| | | | | | Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/MpInitLib: fix issue in wakeup buffer initializationJian J Wang2018-01-252-10/+11
| | | | | | | | | | | | | | | | | | | | To fix an issue in which enabling NX feature will mark the AP wakeup buffer as non-executable and fail the AP init, the buffer was split into two part: the lower part in memory within 1MB and the higher part within allocated executable memory (EfiBootServicesCode). But the address of higher part memory was stored in lower part memory, which is actually shared with legacy components and will be overwritten by LegacyBiosDxe driver if CSM is enabled. This patch fixes this issue by storing the address of higher part memory in CpuMpData instead of ExchangeInfo. Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/MtrrLib: Add comments to recommend to use batch-set APIRuiyu Ni2018-01-242-0/+20
| | | | | | | | | | | | | MtrrSetMemoryAttributesInMtrrSettings() is a batch-set API. When setting multiple ranges of memory attributes, the single-set API (MtrrSetMemoryAttributeInMtrrSettings and MtrrSetMemoryAttribute) may fail, but batch-set API may succeed. Add comments to recommend caller to use batch-set API when setting multiple ranges. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Ming Shao <ming.shao@intel.com>
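A usage sketch of the recommended batch-set call; the MTRR_MEMORY_RANGE layout, the cache-type names, and the exact prototype are assumptions based on MtrrLib's public header, and the address ranges are purely illustrative.

```c
#include <Base.h>
#include <Library/MtrrLib.h>

VOID
ExampleBatchSetMemoryAttributes (
  VOID
  )
{
  MTRR_MEMORY_RANGE  Ranges[2];

  Ranges[0].BaseAddress = 0;            // illustrative: cache low DRAM as write-back
  Ranges[0].Length      = SIZE_2GB;
  Ranges[0].Type        = CacheWriteBack;

  Ranges[1].BaseAddress = 0xFE000000;   // illustrative: keep an MMIO window uncacheable
  Ranges[1].Length      = SIZE_32MB;
  Ranges[1].Type        = CacheUncacheable;

  //
  // One batch call lets MtrrLib lay out the variable MTRRs for all ranges
  // together, where per-range single-set calls might run out of MTRRs.
  //
  MtrrSetMemoryAttributesInMtrrSettings (NULL, Ranges, ARRAY_SIZE (Ranges));
}
```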
* UefiCpuPkg/MtrrLib: Update the comments for RETURN_BUFFER_TOO_SMALLRuiyu Ni2018-01-242-5/+13
| | | | | | Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Ming Shao <ming.shao@intel.com>
* UefiCpuPkg/PeiMpLib: Fix a system hang-in-pei issue.Ruiyu Ni2018-01-241-12/+11
| | | | | | | | | | | | | | | | | | GetWakeupBuffer() tries to find free memory below 1MB; it checks whether the memory is already allocated in CheckOverlapWithAllocatedBuffer(). When there is a memory allocation hob (base = 0xff_00000000, size = 0x10000000), CheckOverlapWithAllocatedBuffer() truncates the base to 0, which causes it to always return TRUE, so GetWakeupBuffer() fails to find a below-1MB memory. The patch fixes this issue by using the UINT64 type. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Reviewed-by: Star Zeng <star.zeng@intel.com> Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
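A sketch of the corrected comparison, keeping everything in UINT64 so an allocation HOB based above 4GB is not truncated; the HOB field names follow EFI_HOB_MEMORY_ALLOCATION, the rest is illustrative.

```c
#include <PiPei.h>

BOOLEAN
ExampleOverlapsAllocationHob (
  IN CONST EFI_HOB_MEMORY_ALLOCATION  *Hob,
  IN UINT64                           WakeupBufferStart,
  IN UINT64                           WakeupBufferEnd
  )
{
  UINT64  HobStart;
  UINT64  HobEnd;

  //
  // No 32-bit intermediate: a base of 0xFF00000000 must stay 0xFF00000000.
  //
  HobStart = Hob->AllocDescriptor.MemoryBaseAddress;
  HobEnd   = HobStart + Hob->AllocDescriptor.MemoryLength;

  return (BOOLEAN)((WakeupBufferStart < HobEnd) && (HobStart < WakeupBufferEnd));
}
```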
* UefiCpuPkg: Update package version.Eric Dong2018-01-222-2/+2
| | | | | | | | Cc: Star Zeng <star.zeng@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Eric Dong <eric.dong@intel.com> Reviewed-by: Star Zeng <star.zeng@intel.com>
* UefiCpuPkg/CpuDxe: fix bad boot performanceJian J Wang2018-01-191-2/+0
| | | | | | | | | | | | | | | | If features like memory profile, protection and heap guard are enabled, a lot of more memory page attributes update actions will happen than usual. An unnecessary sync of CR0.WP setting among APs will then cause worse performance in memory allocation action. Removing the calling of SyncMemoryPageAttributesAp() in function DisableReadOnlyPageWriteProtect and EnableReadOnlyPageWriteProtect can fix this problem. In DEBUG build case, the boot performance can be boosted from 11 minute to 6 minute. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Enable NXE if it's supportedJian J Wang2018-01-182-1/+25
| | | | | | | | | | | | | | | | | | | | | If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory of EfiBootServicesCode, EfiConventionalMemory, the BIOS will hang at a page fault exception triggered by PiSmmCpuDxeSmm. The root cause is that PiSmmCpuDxeSmm will access default SMM RAM starting at 0x30000 which is marked as non-executable, but NX feature was not enabled during SMM initialization. Accessing memory which has invalid attributes set will cause page fault exception. This patch fixes it by checking NX capability in cpuid and enable NXE in EFER MSR if it's available. Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
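A sketch of the check-and-enable sequence the commit describes: CPUID.80000001H:EDX[20] reports Execute Disable support, and bit 11 of IA32_EFER (MSR 0xC0000080) is NXE. The check of the maximum extended CPUID leaf is omitted for brevity, and the function name is illustrative.

```c
#include <Base.h>
#include <Library/BaseLib.h>

VOID
ExampleEnableNxeIfSupported (
  VOID
  )
{
  UINT32  RegEdx;

  AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);   // extended feature flags
  if ((RegEdx & BIT20) != 0) {                        // NX (Execute Disable) available
    AsmWriteMsr64 (
      0xC0000080,                                     // IA32_EFER
      AsmReadMsr64 (0xC0000080) | BIT11               // set NXE
      );
  }
}
```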
* UefiCpuPkg/CpuDxe: clear NX attr for page directoryJian J Wang2018-01-181-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory of EfiBootServicesCode, EfiConventionalMemory and EfiReservedMemoryType, the BIOS will hang at a page fault exception randomly. The root cause is that the memory allocation for driver images (actually a memory type conversion from free memory, type of EfiConventionalMemory, to code memory, type of EfiBootServicesCode/EfiRuntimeServicesCode) will get memory with NX set, because the CpuDxe driver will keep the NX attribute (with free memory) in page directory during page table splitting and then override the NX attribute of all its entries. This patch fixes this issue by not inheriting NX attribute when turning a page entry into a page directory during page granularity split. Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/CpuExceptionHandlerLib: alloc code memory for exception handlersJian J Wang2018-01-181-4/+14
| | | | | | | | | | | | | | | | | | | If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory of EfiBootServicesData, EfiConventionalMemory, the BIOS will reset after timer initialized and started. The root cause is that the memory used to hold the exception and interrupt handler is allocated with type of EfiBootServicesData and marked as non-executable due to NX feature enabled. This patch fixes it by allocating EfiBootServicesCode type of memory for those handlers instead. Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
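A sketch of the kind of allocation the fix switches to: request EfiBootServicesCode pages for buffers that will hold executable handler stubs, so an NX protection policy does not mark them non-executable. The wrapper name is illustrative.

```c
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>

EFI_STATUS
ExampleAllocateExecutableBuffer (
  IN  UINTN  Size,
  OUT VOID   **Buffer
  )
{
  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  Address;

  Status = gBS->AllocatePages (
                  AllocateAnyPages,
                  EfiBootServicesCode,            // executable memory type
                  EFI_SIZE_TO_PAGES (Size),
                  &Address
                  );
  if (!EFI_ERROR (Status)) {
    *Buffer = (VOID *)(UINTN)Address;
  }

  return Status;
}
```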
* UefiCpuPkg/MpInitLib: split wake up buffer into two partsJian J Wang2018-01-188-49/+204
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | If PcdDxeNxMemoryProtectionPolicy is set to enable protection for memory of EfiBootServicesCode, EfiConventionalMemory, the BIOS will hang at a page fault exception during MP initialization. The root cause is that the AP wake up buffer, which is below 1MB and used to hold both AP init code and data, is type of EfiConventionalMemory (not really allocated because of potential conflict with legacy code), and is marked as non-executable. During the transition from real address mode to long mode, the AP init code has to enable paging which will then cause itself a page fault exception because it's just running in non-executable memory. The solution is splitting AP wake up buffer into two part: lower part is still below 1MB and shared with legacy system, higher part is really allocated memory of BootServicesCode type. The init code in the memory below 1MB will not enable paging but just switch to protected mode and jump to higher memory, in which the init code will enable paging and switch to long mode. Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/CpuDxe: fix SetMemoryAttributes issue in 32-bit modeJian J Wang2018-01-181-0/+4
| | | | | | | | | | | | | | | | | | In 32-bit mode, the BIOS will not create page table for memory beyond 4GB and therefore it cannot handle the attributes change request for those memory. But current CpuDxe doesn't check this situation and still try to complete the request, which will cause attributes of incorrect memory address to be changed due to type cast from 64-bit to 32-bit. This patch fixes this issue by checking the end address of input memory block and returning EFI_UNSUPPORTED if it's out of range. Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Ruiyu Ni <ruiyu.ni@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jian J Wang <jian.j.wang@intel.com> Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
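A sketch of the range check the fix adds, with illustrative names: in a 32-bit build there is no page table for memory above 4GB, so such a request has to be rejected rather than silently truncated by the pointer cast.

```c
#include <Uefi.h>

EFI_STATUS
ExampleValidateAttributeRange (
  IN EFI_PHYSICAL_ADDRESS  BaseAddress,
  IN UINT64                Length
  )
{
  if ((sizeof (UINTN) == sizeof (UINT32)) &&
      ((BaseAddress + Length) > BASE_4GB)) {
    return EFI_UNSUPPORTED;                 // cannot describe pages above 4GB in IA32 mode
  }

  return EFI_SUCCESS;
}
```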
* UefiCpuPkg/MpInitLib: Fix timer interrupt is disabled after SwitchBSPRuiyu Ni2018-01-181-0/+1
| | | | | | | | | | | | | | Commits a2ea6894e6ca95e8d7a254593661a79e4b988626 * UefiCpuPkg/MpInitLib: Fix a bug that AP enters timer INT handler masked the interrupts in AP. But it didn't unmask the interrupt in new BSP when Switch BSP happens. The patch fixed this issue. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com> Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com> Cc: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg: Update PiSmmCpuDxeSmm pass XCODE5 tool chainLiming Gao2018-01-169-24/+79
| | | | | | | | | | | | | | | | | | | | | https://bugzilla.tianocore.org/show_bug.cgi?id=849 In V2, use "mov rax, strict qword 0" to replace the hard code db. 1. Use lea instruction to get the address instead of mov instruction. 2. Use the dummy address as jmp destination, and add the logic to fix up the address to the absolute address at boot time. 3. On MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits. This way is same to MpInitLib. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Liming Gao <liming.gao@intel.com> Cc: Andrew Fish <afish@apple.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Michael Kinney <michael.d.kinney@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg: Update SmmCpuFeatureLib pass XCODE5 tool chainLiming Gao2018-01-165-22/+52
| | | | | | | | | | | | | | | | | | | https://bugzilla.tianocore.org/show_bug.cgi?id=849 In V2, use "mov rax, strict qword 0" to replace the hard code db. 1. Use lea instruction to get the address instead of mov instruction. 2. Use the dummy address as jmp destination, and add the logic to fix up the address to the absolute address at boot time. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Liming Gao <liming.gao@intel.com> Cc: Andrew Fish <afish@apple.com> Cc: Jiewen Yao <jiewen.yao@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Michael Kinney <michael.d.kinney@intel.com> Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>