path: root/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
Commit message | Author | Date | Files | Lines
* UefiCpuPkg/PiSmmCpuDxeSmm: Introduce page table pool mechanism (duntan; 2022-12-21; 1 file, -30/+0)
  Introduce the page table pool mechanism for the SMM page table to simplify page table memory management and protection. This mechanism is already used in DxeIpl. The basic idea is to allocate a bunch of contiguous pages of memory in advance, so that all future page table allocations are served from those pools instead of from system memory. Since the page tables are centralized, we only need to mark all page table pools as read-only, instead of searching the SMM page table memory layer by layer. Once the current page table pool has been used up, another memory pool is allocated, and the new pool is also set as read-only if the current page table memory has already been marked as read-only.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
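  A minimal C sketch of the pool idea this commit describes, with made-up names (PAGE_TABLE_POOL, PageTablePoolAllocatePages) and plain aligned_alloc() standing in for SMRAM allocation; the real driver also applies the read-only attribute through its own paging code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE        0x1000
    #define POOL_UNIT_PAGES  128              /* pages reserved per pool allocation (assumed value) */

    /* Hypothetical pool descriptor: a contiguous run of pages carved out in advance. */
    typedef struct {
      uint8_t *Base;        /* start of the reserved pages   */
      size_t   FreePages;   /* pages not yet handed out      */
      size_t   NextFree;    /* index of the next unused page */
    } PAGE_TABLE_POOL;

    static PAGE_TABLE_POOL  mPool;   /* current pool; a new one is created when this runs dry */

    /* Reserve a fresh batch of contiguous, page-aligned memory up front. */
    static int PageTablePoolInit (PAGE_TABLE_POOL *Pool)
    {
      Pool->Base = aligned_alloc (PAGE_SIZE, POOL_UNIT_PAGES * PAGE_SIZE);
      if (Pool->Base == NULL) {
        return -1;
      }
      Pool->FreePages = POOL_UNIT_PAGES;
      Pool->NextFree  = 0;
      return 0;
    }

    /* Every page-table page comes from the pool, never from general system memory,
       so protecting the page tables later means marking whole pools read-only
       instead of walking the paging structures level by level. */
    static void *PageTablePoolAllocatePages (size_t Pages)
    {
      void  *Page;

      if (mPool.Base == NULL || mPool.FreePages < Pages) {
        /* Current pool exhausted: start another one.  The real driver would also
           mark the new pool read-only here if page-table memory is already RO. */
        if (PageTablePoolInit (&mPool) != 0) {
          return NULL;
        }
      }
      Page             = mPool.Base + mPool.NextFree * PAGE_SIZE;
      mPool.NextFree  += Pages;
      mPool.FreePages -= Pages;
      memset (Page, 0, Pages * PAGE_SIZE);
      return Page;
    }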
* UefiCpuPkg: Check SMM Delayed/Blocked AP Count (Wu, Jiaxin; 2022-12-08; 1 file, -0/+5)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4173
  As core counts increase, it becomes hard to reflect the state of all APs via the AP bit-vector support in the register. The SMM CPU driver actually does not need to check each AP's state to know whether all CPUs are in SMI; an alternative is to check the SMM Delayed & Blocked AP counts:
    APs in SMI + Blocked Count + Disabled Count >= All supported APs
  (code comments explain why the sum can be greater than all supported APs).
  With the above change, the values of "SmmRegSmmEnable", "SmmRegSmmDelayed" and "SmmRegSmmBlocked" returned from SmmCpuFeaturesLib should be the AP count within the existing CPU package. For registers that return the bit-vector state, SmmCpuFeaturesGetSmmRegister() is required to return the count of all bits per logical processor within the same package. For registers that return the AP count, SmmCpuFeaturesGetSmmRegister() is required to return the register value directly.
  v3:
  - Refine the coding style
  v2:
  - Rename "mPackageBspInfo" to "mPackageFirstThreadIndex"
  - Clarify the expected values of "SmmRegSmmEnable" & "SmmRegSmmDelayed" & "SmmRegSmmBlocked" returned from SmmCpuFeaturesLib.
  - Thread: https://edk2.groups.io/g/devel/message/96722
  v1:
  - Thread: https://edk2.groups.io/g/devel/message/96671
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Zeng Star <star.zeng@intel.com>
  Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
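  Restated as a small C sketch of the counting check (the variable names here are placeholders, not the driver's actual globals; in the driver the counts come from the SMM save state and from the SmmCpuFeaturesGetSmmRegister() totals described above):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t  mApsInSmi;       /* APs that have checked in to the current SMI */
    static uint32_t  mBlockedCount;   /* APs reported as SMI-blocked                 */
    static uint32_t  mDisabledCount;  /* APs reported as SMI-disabled (delayed)      */
    static uint32_t  mNumberOfCpus;   /* all supported logical processors            */

    /* Instead of testing a per-AP bit vector, compare aggregate counts.  The sum
       may exceed mNumberOfCpus because an AP can be counted both as blocked and
       as checked-in during the transition, hence ">=" rather than "==". */
    static bool AllCpusAccountedFor (void)
    {
      return (mApsInSmi + mBlockedCount + mDisabledCount) >= mNumberOfCpus;
    }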
* UefiCpuPkg: Apply uncrustify changes (Michael Kubacki; 2021-12-07; 1 file, -119/+145)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3737
  Apply uncrustify changes to .c/.h files in the UefiCpuPkg package.
  Cc: Andrew Fish <afish@apple.com>
  Cc: Leif Lindholm <leif@nuviainc.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg: Change complex DEBUG_CODE() to DEBUG_CODE_BEGIN/END() (Michael D Kinney; 2021-12-07; 1 file, -2/+2)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3767
  Update use of DEBUG_CODE(Expression) if Expression is a complex code block with if/while/for/case statements that use {}.
  Cc: Andrew Fish <afish@apple.com>
  Cc: Leif Lindholm <leif@nuviainc.com>
  Cc: Michael Kubacki <michael.kubacki@microsoft.com>
  Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg: Change use of EFI_D_* to DEBUG_* (Michael D Kinney; 2021-12-07; 1 file, -7/+7)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3739
  Update all use of EFI_D_* defines in DEBUG() macros to DEBUG_* defines.
  Cc: Andrew Fish <afish@apple.com>
  Cc: Leif Lindholm <leif@nuviainc.com>
  Cc: Michael Kubacki <michael.kubacki@microsoft.com>
  Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Use SMM Interrupt Shadow Stack (Sheng, W; 2021-11-12; 1 file, -19/+42)
  When the CET shadow stack feature is enabled, the exceptions need to use IST, and the interrupt shadow stack is used for the stack switch. The shadow stack should be 32-byte aligned. Check the IST field when clearing the shadow stack token busy bit while using retf.
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3728
  Signed-off-by: Sheng Wei <w.sheng@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Update mPatchCetSupported set condition (Wenxing Hou; 2021-09-01; 1 file, -2/+5)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3584
  The code should first check the maximum value reported for Basic CPUID Information before querying higher CPUID leaves with AsmCpuid. The fix updates the judgment statement that sets mPatchCetSupported.
  Signed-off-by: Wenxing Hou <wenxing.hou@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Sheng W <w.sheng@intel.com>
  Cc: Yao Jiewen <jiewen.yao@intel.com>
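  A hedged sketch of that ordering, written against the GCC/Clang <cpuid.h> helpers rather than edk2's AsmCpuid/AsmCpuidEx (the leaf number and the CET_SS bit position are the architectural values; the function name is made up):

    #include <stdbool.h>
    #include <cpuid.h>   /* __get_cpuid_max and __cpuid_count wrap the CPUID instruction */

    #define CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS  0x07
    #define CET_SS_BIT                                (1u << 7)   /* CPUID.(EAX=7,ECX=0):ECX[7] = CET shadow stack */

    /* Query leaf 7 only if leaf 0 says leaf 7 exists; reading a leaf beyond the
       reported maximum does not return the intended feature flags. */
    static bool IsCetShadowStackSupported (void)
    {
      unsigned int MaxLeaf, Eax, Ebx, Ecx, Edx;

      MaxLeaf = __get_cpuid_max (0, NULL);   /* highest supported basic CPUID leaf */
      if (MaxLeaf < CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS) {
        return false;
      }
      __cpuid_count (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, 0, Eax, Ebx, Ecx, Edx);
      return (Ecx & CET_SS_BIT) != 0;
    }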
* UefiCpuPkg/PiSmm: Fix various typos (Antoine Coeur; 2020-02-10; 1 file, -2/+2)
  Fix various typos in comments and documentation.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Antoine Coeur <coeur@gmx.fr>
  Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
  Message-Id: <20200207010831.9046-78-philmd@redhat.com>
* UefiCpuPkg/PiSmmCpu: Restrict access per PcdCpuSmmRestrictedMemoryAccess (Ray Ni; 2019-09-04; 1 file, -8/+10)
  Today's behavior is to always restrict access to non-SMRAM, regardless of the value of PcdCpuSmmRestrictedMemoryAccess. Because RAS components require access to all non-SMRAM memory, the patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess, so that the restriction takes effect, and page table memory is also protected, only when the PCD is TRUE. Because the IA32 build does not reference this PCD, the restriction always takes effect in the IA32 build.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* Revert "UefiCpuPkg/PiSmmCpu: Allow SMM access-out when static paging is OFF" (Laszlo Ersek; 2019-07-26; 1 file, -15/+6)
  This reverts commit 30f6148546c7092650ab4886fc6d95d5065c3188.
  Commit 30f6148546c7 causes a build failure when building for IA32:
  > UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c: In function 'PerformRemainingTasks':
  > UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c:1440:9: error: 'mCpuSmmStaticPageTable' undeclared (first use in this function)
  >    if (mCpuSmmStaticPageTable) {
  "mCpuSmmStaticPageTable" is an X64-only variable. It is defined in "X64/PageTbl.c", which is not linked into the IA32 binary. We must not reference the variable in such code that is linked into both IA32 and X64 builds, such as "PiSmmCpuDxeSmm.c".
  We have encountered the same challenge at least once in the past:
  - https://bugzilla.tianocore.org/show_bug.cgi?id=1593
  - commit 37f9fea5b88d ("UefiCpuPkg\CpuSmm: Save & restore CR2 on-demand paging in SMM", 2019-04-04)
  The right approach is to declare a new function in "PiSmmCpuDxeSmm.h", and to provide two definitions for the function, one in "Ia32/PageTbl.c", and another in "X64/PageTbl.c". The IA32 implementation should return a constant value. The X64 implementation should return "mCpuSmmStaticPageTable". (In the example named above, the functions were SaveCr2() and RestoreCr2().)
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  [lersek@redhat.com: push revert immediately, due to build breakage that would have been easy to catch before submitting the patch]
* UefiCpuPkg/PiSmmCpu: Allow SMM access-out when static paging is OFF (Ni, Ray; 2019-07-26; 1 file, -6/+15)
  Commit c60d36b4d1ee1f69b7cca897d3621dfa951895c2
    * UefiCpuPkg/SmmCpu: Block access-out only when static paging is used
  updated the page fault handler to treat SMM access-out as an allowed address when static paging is not used. But that commit is not complete, because the page table is still updated in SetUefiMemMapAttributes() for non-SMRAM memory. When SMM code accesses non-SMRAM memory, a page fault is still generated.
  This patch skips updating the page table for non-SMRAM memory and for the page table itself.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jian J Wang <jian.j.wang@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Enable MM MP Protocol (Eric Dong; 2019-07-16; 1 file, -0/+18)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1937
  Add MM Mp Protocol in PiSmmCpuDxeSmm driver.
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Replace BSD License with BSD+Patent License (Michael D Kinney; 2019-04-09; 1 file, -7/+1)
  https://bugzilla.tianocore.org/show_bug.cgi?id=1373
  Replace BSD 2-Clause License with BSD+Patent License. This change is based on the following emails:
  https://lists.01.org/pipermail/edk2-devel/2019-February/036260.html
  https://lists.01.org/pipermail/edk2-devel/2018-October/030385.html
  RFCs with detailed process for the license change:
  V3: https://lists.01.org/pipermail/edk2-devel/2019-March/038116.html
  V2: https://lists.01.org/pipermail/edk2-devel/2019-March/037669.html
  V1: https://lists.01.org/pipermail/edk2-devel/2019-March/037500.html
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/PiSmmCpu: Add Shadow Stack Support for X86 SMM. (Jiewen Yao; 2019-02-28; 1 file, -12/+85)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
  We scan the SMM code with ROPgadget:
  http://shell-storm.org/project/ROPgadget/
  https://github.com/JonathanSalwan/ROPgadget/tree/master
  This tool reports the gadgets in the SMM driver.
  This patch enables CET ShadowStack for X86 SMM. If CET is supported, SMM will enable CET ShadowStack. SMM CET will save the OS CET context at SmmEntry and restore the OS CET context at SmmExit.
  Test:
  1) Tested on an Intel internal platform (x64 only, CET enabled/disabled).
     Boot test:
     CET supported or not supported CPU on a CET supported platform
     CET enabled/disabled
     PcdCpuSmmCetEnable enabled/disabled
     Single core/Multiple core
     PcdCpuSmmStackGuard enabled/disabled
     PcdCpuSmmProfileEnable enabled/disabled
     PcdCpuSmmStaticPageTable enabled/disabled
     CET exception test:
     #CF generated with PcdCpuSmmStackGuard enabled/disabled
     Other exception test:
     #PF for normal stack overflow
     #PF for NX protection
     #PF for RO protection
     CET environment test:
     Launch SMM in CET enabled/disabled environment (DXE) - no impact to DXE
     The test case can be found at https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
  2) Tested OVMF (both IA32 and X64 SMM, CET disabled only):
     OvmfIa32/Ovmf3264, with -D SMM_REQUIRE.
     qemu-system-x86_64.exe -machine q35,smm=on -smp 4 -serial file:serial.log -drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
     QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
  3) Not tested: IA32 CET enabled platform.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Update to consume SpeculationBarrier (Hao Wu; 2018-12-25; 1 file, -3/+3)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1417
  Since the BaseLib API AsmLfence() is an x86 arch-specific API and should be avoided in generic code, this commit replaces the usage of AsmLfence() with the arch-generic API SpeculationBarrier().
  Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Liming Gao <liming.gao@intel.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: [CVE-2017-5753] Fix bounds check bypass (Hao Wu; 2018-09-30; 1 file, -0/+5)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1194
  Speculative execution is used by processors to avoid having to wait for data to arrive from memory, or for previous operations to finish; the processor may speculate as to what will be executed. If the speculation is incorrect, the speculatively executed instructions might leave hints, such as which memory locations have been brought into cache. Malicious actors can use the bounds check bypass method (code gadgets with controlled external inputs) to infer data values that have been used in speculative operations, revealing secrets which should not otherwise be accessed.
  It is possible for SMI handler(s) to call the EFI_SMM_CPU_PROTOCOL service ReadSaveState() and use the content of the 'CommBuffer' (controlled external input) as the 'CpuIndex'. So this commit inserts the AsmLfence API to mitigate the bounds check bypass issue within SmmReadSaveState().
  For SmmReadSaveState(), the 'CpuIndex' is passed into function ReadSaveStateRegister() and then into ReadSaveStateRegisterByIndex(). With the call:
    ReadSaveStateRegisterByIndex (
      CpuIndex,
      SMM_SAVE_STATE_REGISTER_IOMISC_INDEX,
      sizeof (IoMisc.Uint32),
      &IoMisc.Uint32
      );
  the read of 'IoMisc' can be a cross-boundary access during speculative execution. Later, 'IoMisc' is used as the index to access the buffers 'mSmmCpuIoWidth' and 'mSmmCpuIoType'. One can observe which part of the content within those buffers was brought into cache to possibly reveal the value of 'IoMisc'.
  Hence, this commit adds an AsmLfence() after the check of 'CpuIndex' within function SmmReadSaveState() to prevent the speculative execution.
  A more detailed explanation of the purpose of this commit is under the 'Bounds check bypass mitigation' section of the link below:
  https://software.intel.com/security-software-guidance/insights/host-firmware-speculative-execution-side-channel-mitigation
  And the document at:
  https://software.intel.com/security-software-guidance/api-app/sites/default/files/337879-analyzing-potential-bounds-Check-bypass-vulnerabilities.pdf
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
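  A generic illustration of the mitigation pattern described above (bounds check, then a load fence, then the dependent access), using a made-up mPerCpuState table and the _mm_lfence() intrinsic in place of BaseLib's AsmLfence():

    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>   /* _mm_lfence(): x86 load fence / speculation barrier */

    #define MAX_CPUS  64

    static uint32_t  mPerCpuState[MAX_CPUS];

    /* CpuIndex comes from an externally controlled communication buffer.  Without
       the fence, the CPU may speculatively load mPerCpuState[CpuIndex] before the
       bounds check retires, leaving a cache footprint that can leak data.  The
       fence keeps the dependent load from issuing until the check resolves. */
    static int ReadCpuState (size_t CpuIndex, uint32_t *State)
    {
      if (CpuIndex >= MAX_CPUS) {
        return -1;                 /* reject out-of-range input */
      }
      _mm_lfence ();               /* speculation barrier right after the bounds check */
      *State = mPerCpuState[CpuIndex];
      return 0;
    }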
* UefiCpuPkg/PiSmmCpuDxeSmm: clear exec file mode bits on "PiSmmCpuDxeSmm.c" (Laszlo Ersek; 2018-08-22; 1 file, -0/+0)
  Commit 241f914975d5 ("UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask", 2017-03-01) unintentionally set the executable file mode bits on "PiSmmCpuDxeSmm.c"; clear them now.
  Cc: Eric Dong <eric.dong@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1092
  Fixes: 241f914975d50e34f6da57d1e5ac60eedb5d52de
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmInitStack" with PatchInstructionX86() (Laszlo Ersek; 2018-04-04; 1 file, -1/+5)
  Rename the variable to "gPatchSmmInitStack" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm". The size of the patched source operand is (sizeof (UINTN)).
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: eliminate "gSmmJmpAddr" and related DBs (Laszlo Ersek; 2018-04-04; 1 file, -7/+0)
  The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can simply use the NASM syntax for the following Mixed-Size Jump:
  > jmp PROTECT_MODE_CS : dword @32bit
  The generated object code for the instruction is unchanged:
  > 00000182 66EA5A0000000800  jmp dword 0x8:0x5a
  (The NASM manual explains that putting the DWORD prefix after the colon ":" reflects the intent better, since it is the offset that is a DWORD. Thus, that's what I used. However, both syntaxes are interchangeable, hence the ndisasm output.)
  The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr"; however that's accidental, not inherent:
  - Bring LONG_MODE_CODE_SEGMENT from "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32 version as PROTECT_MODE_CS earlier.
  - Apply the NASM-native Mixed-Size Jump syntax again, but jump to the fixed zero offset in LONG_MODE_CS. This will produce no relocation record at all. Add a label after the instruction.
  - Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards from the label. Because we modify the DWORD offset with a DWORD access, the segment selector is unharmed in the instruction, and we need not set it from PiCpuSmmEntry().
  According to "objdump --reloc", the X64 version undergoes only the following relocations, after this patch:
  > RELOCATION RECORDS FOR [.text]:
  > OFFSET           TYPE              VALUE
  > 0000000000000095 R_X86_64_PC32     SmmInitHandler-0x0000000000000004
  > 00000000000000e0 R_X86_64_PC32     mRebasedFlag-0x0000000000000004
  > 00000000000000ea R_X86_64_PC32     mSmmRelocationOriginalAddress-0x0000000000000004
  Therefore the patch does not regress <https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool chain for UefiCpuPkg with nasm source code").
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr0" with PatchInstructionX86() (Laszlo Ersek; 2018-04-04; 1 file, -1/+3)
  Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for machine code patching, but also as a means to communicate the initial CR0 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr0Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr0Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr0" variable to "gPatchSmmCr0" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr0".
  This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr4" with PatchInstructionX86() (Laszlo Ersek; 2018-04-04; 1 file, -1/+7)
  Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for machine code patching, but also as a means to communicate the initial CR4 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr4Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr4Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr4" variable to "gPatchSmmCr4" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr4".
  This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr3" with PatchInstructionX86() (Laszlo Ersek; 2018-04-04; 1 file, -1/+1)
  Rename the variable to "gPatchSmmCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
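  A rough, self-contained C illustration of the patch-label idea used by the gSmmCr3/gSmmCr4/gSmmCr0 commits above (this is not BaseLib's PatchInstructionX86() implementation; the label type and the PatchOperand() helper are stand-ins, and a little-endian host is assumed):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* A label placed immediately AFTER an instruction marks where its immediate
       operand ends; C code then rewrites the operand by writing backwards from
       that label.  Here a byte buffer stands in for the assembly code. */
    typedef uint8_t X86_ASSEMBLY_PATCH_LABEL;

    static void PatchOperand (X86_ASSEMBLY_PATCH_LABEL *InstructionEnd,
                              uint64_t                  PatchValue,
                              size_t                    ValueSize)
    {
      assert (ValueSize == 1 || ValueSize == 2 || ValueSize == 4 || ValueSize == 8);
      /* x86 immediates are little-endian, so copying the low ValueSize bytes of
         PatchValue just below the label rewrites the operand in place. */
      memcpy ((uint8_t *) InstructionEnd - ValueSize, &PatchValue, ValueSize);
    }

    int main (void)
    {
      /* "mov eax, imm32" encodes as B8 xx xx xx xx; the patch label would sit right after it. */
      uint8_t  MovEaxImm32[5] = { 0xB8, 0x00, 0x00, 0x00, 0x00 };

      PatchOperand (MovEaxImm32 + sizeof (MovEaxImm32), 0x12345678, 4);
      /* MovEaxImm32 now encodes "mov eax, 0x12345678". */
      return 0;
    }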
* UefiCpuPkg: Update PiSmmCpuDxeSmm pass XCODE5 tool chain (Liming Gao; 2018-01-16; 1 file, -1/+7)
  https://bugzilla.tianocore.org/show_bug.cgi?id=849
  In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
  1. Use the lea instruction to get the address instead of the mov instruction.
  2. Use a dummy address as the jmp destination, and add logic to fix up the address to the absolute address at boot time.
  3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits. This is the same approach as MpInitLib.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
  Cc: Andrew Fish <afish@apple.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg: Fix unix style of EOL (Jian J Wang; 2017-11-21; 1 file, -20/+20)
  Cc: Wu Hao <hao.a.wu@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add SmmMemoryAttribute protocol (Jian J Wang; 2017-11-17; 1 file, -0/+20)
  Heap guard makes use of the paging mechanism to implement its functionality, but there is no protocol or library available to change page attributes in SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that possible. This protocol provides three interfaces:
    struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
      EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
      EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
      EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
    };
  Since the heap guard feature needs to update page attributes, the page table must not be set to read-only when heap guard is enabled for SMM mode; otherwise the feature cannot work.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Centralize mPhysicalAddressBits definition (Star Zeng; 2017-08-28; 1 file, -0/+2)
  Originally (before 714c2603018a99a514c42c2b511c821f30ba9cdf), mPhysicalAddressBits was only defined in the X64 PageTbl.c. After 714c2603018a99a514c42c2b511c821f30ba9cdf, mPhysicalAddressBits is also defined in the Ia32 PageTbl.c, and it is used in ConvertMemoryPageAttributes() for address checks. This patch centralizes the mPhysicalAddressBits definition in PiSmmCpuDxeSmm.c, removing it from the Ia32 and X64 PageTbl.c.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Suggested-by: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Check ProcessorId == INVALID_APIC_ID (Jeff Fan; 2017-05-11; 1 file, -1/+6)
  If PcdCpuHotPlugSupport is TRUE, gSmst->NumberOfCpus will be PcdCpuMaxLogicalProcessorNumber. If gSmst->SmmStartupThisAp() is invoked for processors that do not exist, an ASSERT() happens in ConfigSmmCodeAccessCheck().
  This fix checks that ProcessorId is valid before invoking gSmst->SmmStartupThisAp() in ConfigSmmCodeAccessCheck(), and checks that ProcessorId is valid in InternalSmmStartupThisAp(), to avoid unexpected DEBUG error messages.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* PeCoffGetEntryPointLib: Fix spelling issue (Jeff Fan; 2017-04-26; 1 file, -1/+1)
  *Serach* should be *Search*.
  Cc: Liming Gao <liming.gao@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Consume new APIs (Jeff Fan; 2017-04-07; 1 file, -34/+3)
  Consume PeCoffSerachImageBase() from PeCoffGetEntrypointLib and DumpCpuContext() from CpuExceptionHandlerLib to replace the driver's own implementations.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Save SMM ranges info into global variables (Jeff Fan; 2017-04-01; 1 file, -20/+24)
  v2: Add #define SMRR_MAX_ADDRESS to clarify SMRR requirement.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg: Refine casting expression result to bigger size (Hao Wu; 2017-03-06; 1 file, -2/+2)
  There are cases where the operands of an expression are all of rank less than UINT64/INT64 and the result of the expression is explicitly cast to UINT64/INT64 to fit the target size. An example:
    UINT32 a,b;
    // a and b can be any unsigned int type with rank less than UINT64, like
    // UINT8, UINT16, etc.
    UINT64 c;
    c = (UINT64) (a + b);
  Some static code checkers may warn that the expression result might overflow within the rank of "int" (integer promotions) and the result is then cast to a bigger size.
  The commit refines the code by the following rules:
  1). When the expression may overflow the range of unsigned int/int:
    c = (UINT64)a + b;
  2). When the expression will not overflow within the rank of "int", remove the explicit type cast:
    c = a + b;
  3). When the expression will be cast to a pointer of possibly greater size:
    UINT32 a,b;
    VOID *c;
    c = (VOID *)(UINTN)(a + b);  -->  c = (VOID *)((UINTN)a + b);
  4). When one side of a comparison expression contains only operands with rank less than UINT32:
    UINT8 a;
    UINT16 b;
    UINTN c;
    if ((UINTN)(a + b) > c) {...}  -->  if (((UINT32)a + b) > c) {...}
  For rule 4), if we removed the 'UINTN' type cast, as in:
    if (a + b > c) {...}
  the VS compiler would complain with warning C4018 (signed/unsigned mismatch, level 3 warning) due to promoting 'a + b' to type 'int'.
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask (Leo Duran; 2017-03-01; 1 file, -0/+14)
  This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature. The mask is applied when page table entries are created or modified.
  CC: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Brijesh Singh <brijesh.singh@amd.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Leo Duran <leo.duran@amd.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpu: Add SMM Comm Buffer Paging Protection. (Jiewen Yao; 2016-12-19; 1 file, -12/+11)
  This patch sets the normal OS buffers (EfiLoaderCode/Data, EfiBootServicesCode/Data, EfiConventionalMemory, EfiACPIReclaimMemory) to be not-present after SmmReadyToLock.
  Accessing these regions in the OS runtime phase is not a good practice. Previously, we did a similar check in SmmMemLib to help SMI handlers do the check, but if an SMI handler forgets the check, it can still access these OS regions and bring risk. So here we enforce the policy to prevent that from happening.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove PSD layout assumptions (Michael Kinney; 2016-12-01; 1 file, -7/+8)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  Remove dependency on the layout of PROCESSOR_SMM_DESCRIPTOR everywhere possible. The only exception is the standard SMI entry handler template that is included with the PiSmmCpuDxeSmm module. This allows an instance of the SmmCpuFeaturesLib to provide alternate PROCESSOR_SMM_DESCRIPTOR structure layouts.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: handle dynamic PcdCpuMaxLogicalProcessorNumber (Laszlo Ersek; 2016-11-28; 1 file, -1/+1)
  "UefiCpuPkg/UefiCpuPkg.dec" already allows platforms to make PcdCpuMaxLogicalProcessorNumber dynamic, however PiSmmCpuDxeSmm does not take this into account everywhere. As soon as a platform turns the PCD into a dynamic one, at least S3 fails. When the PCD is dynamic, all PcdGet() calls translate into PCD DXE protocol calls, which are only permitted at boot time, not at runtime or during S3 resume.
  We already have a variable called mMaxNumberOfCpus; it is initialized in the entry point function like this:
  > //
  > // If support CPU hot plug, we need to allocate resources for possibly
  > // hot-added processors
  > //
  > if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
  >   mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
  > } else {
  >   mMaxNumberOfCpus = mNumberOfCpus;
  > }
  There's another use of the PCD a bit higher up, also in the entry point function:
  > //
  > // Use MP Services Protocol to retrieve the number of processors and
  > // number of enabled processors
  > //
  > Status = MpServices->GetNumberOfProcessors (MpServices, &mNumberOfCpus,
  >            &NumberOfEnabledProcessors);
  > ASSERT_EFI_ERROR (Status);
  > ASSERT (mNumberOfCpus <= PcdGet32 (PcdCpuMaxLogicalProcessorNumber));
  Preserve these calls in the entry point function, and replace all other uses of PcdCpuMaxLogicalProcessorNumber -- there are only reads -- with mMaxNumberOfCpus.
  For PcdCpuHotPlugSupport==TRUE, this is an unobservable change. For PcdCpuHotPlugSupport==FALSE, we even save SMRAM, because we no longer allocate resources needlessly for CPUs that can never appear in the system.
  PcdCpuMaxLogicalProcessorNumber is also retrieved in "UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c", but only in the library instance constructor, which runs even before the entry point function is called.
  Cc: Igor Mammedov <imammedo@redhat.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Jordan Justen <jordan.l.justen@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=116
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection. (Jiewen Yao; 2016-11-17; 1 file, -2/+142)
  PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table as follows:
  1) The code region is marked as read-only and the data region is made non-executable, if the PE image is 4K aligned.
  2) Important data structures are set to RO, such as the GDT/IDT.
  3) SmmSaveState is set to non-executable, and SmmEntrypoint is set to read-only.
  4) If static paging is supported, the page table itself is read-only.
  We use the page table to protect other components, and itself. If we use dynamic paging, we can still provide *partial* protection, and hope the page table is not modified by other components.
  The XD enabling code is moved to SmiEntry to let NX take effect.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Free SmramRanges to save SMM space (Jeff Fan; 2016-11-16; 1 file, -0/+1)
  Cc: Zeng Star <star.zeng@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Zeng Star <star.zeng@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Consume PcdAcpiS3Enable to control the code (Star Zeng; 2016-09-01; 1 file, -0/+1)
  If PcdAcpiS3Enable is disabled, then skip the S3-related logic.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Move S3 related code to CpuS3.c (Star Zeng; 2016-09-01; 1 file, -320/+3)
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Clean up CheckFeatureSupported() (Jeff Fan; 2016-07-14; 1 file, -1/+1)
  Removed EFIAPI and the parameter from CheckFeatureSupported(), and removed CheckProcessorFeature() entirely.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Check XD/BTS features in SMM relocation (Jeff Fan; 2016-07-14; 1 file, -5/+7)
  CheckProcessorFeature() invokes MpService->StartupAllAps() to detect the XD/BTS features on the normal boot path. This is not necessary and may cause a performance impact, because INIT-SIPI-SIPI must be sent to the APs if they are in hlt-loop mode.
  XD/BTS feature detection is moved to SmmInitHandler(), in SMM relocation, during the normal boot path.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add SMM S3 boot flag (Jeff Fan; 2016-07-14; 1 file, -0/+7)
  It will be set to TRUE during S3 resume.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add MemoryMapped in SetProcessorRegister() (Jeff Fan; 2016-07-14; 1 file, -0/+2)
  REGISTER_TYPE in UefiCpuPkg/Include/AcpiCpuData.h defines a MemoryMapped enum value. However, support for the MemoryMapped enum is missing from the implementation of SetProcessorRegister().
  This patch adds support for the MemoryMapped type in SetProcessorRegister(). One spin lock is added to avoid a potential conflict when multiple processors update the same memory space.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
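  A sketch of the "one shared lock around a memory-mapped register update" pattern that commit describes, in plain C11 with made-up names (the driver uses its own SMM spin lock primitives, and the real register table entry carries bit-field positions rather than masks):

    #include <stdatomic.h>
    #include <stdint.h>

    static atomic_flag  mMemoryMappedLock = ATOMIC_FLAG_INIT;   /* lock shared by all processors */

    /* Several processors may be told to program the same memory-mapped address,
       so the read-modify-write is serialized with a single shared spin lock. */
    static void SetMemoryMappedRegister (uintptr_t Address, uint32_t AndMask, uint32_t OrMask)
    {
      volatile uint32_t  *Reg = (volatile uint32_t *) Address;

      while (atomic_flag_test_and_set_explicit (&mMemoryMappedLock, memory_order_acquire)) {
        /* spin until the lock is released */
      }
      *Reg = (*Reg & AndMask) | OrMask;   /* apply the register setting */
      atomic_flag_clear_explicit (&mMemoryMappedLock, memory_order_release);
    }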
* UefiCpuPkg/PiSmmCpuDxeSmm: Using CPU semaphores in aligned buffer (Jeff Fan; 2016-05-24; 1 file, -2/+2)
  Update each CPU's semaphores to the ones in the allocated aligned semaphore buffer.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Using global semaphores in aligned buffer (Jeff Fan; 2016-05-24; 1 file, -8/+8)
  Update all global semaphores to the ones in the allocated aligned semaphore buffer.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
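  A rough C sketch of carving the global and per-CPU semaphores out of one aligned buffer, as these two commits do; the alignment value, CPU count, and structure names are assumptions for illustration only:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define SEMAPHORE_ALIGNMENT  64   /* assumed: one cache line per semaphore to avoid false sharing */
    #define CPU_COUNT            8    /* assumed fixed count; the driver sizes this from CPU data     */

    typedef struct {
      volatile uint32_t  *Counter;          /* global rendezvous counter           */
      volatile uint32_t  *Run[CPU_COUNT];   /* one per-CPU semaphore, each aligned */
    } SMM_SEMAPHORES;

    /* Place every semaphore in a single aligned allocation instead of scattering
       them across separately defined globals. */
    static int InitSemaphores (SMM_SEMAPHORES *Sem)
    {
      size_t    Total  = (1 + CPU_COUNT) * SEMAPHORE_ALIGNMENT;
      uint8_t  *Buffer = aligned_alloc (SEMAPHORE_ALIGNMENT, Total);
      size_t    Index;

      if (Buffer == NULL) {
        return -1;
      }
      memset (Buffer, 0, Total);
      Sem->Counter = (volatile uint32_t *) Buffer;
      for (Index = 0; Index < CPU_COUNT; Index++) {
        Sem->Run[Index] = (volatile uint32_t *) (Buffer + (Index + 1) * SEMAPHORE_ALIGNMENT);
      }
      return 0;
    }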
* UefiCpuPkg/PiSmmCpuDxeSmm: Initialize gSmst fields on S3 resume (Michael Kinney; 2015-12-24; 1 file, -0/+9)
  Update S3 resume path to initialize the fields of gSmst before the gSmst fields are used to complete initialization in S3 resume.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19504 6f19259b-4bc3-4df7-8a09-765794883524
* UefiCpuPkg/PiSmmCpuDxeSmm: Correct CPUID leaf used to detect SMM mode (Michael Kinney; 2015-12-24; 1 file, -1/+6)
  Use Bit 29 of CPUID leaf CPUID_EXTENDED_CPU_SIG (0x80000001) to determine the SMM save state mode. The previous version of this code used CPUID leaf CPUID_VERSION_INFO (0x00000001).
  Cc: Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@19503 6f19259b-4bc3-4df7-8a09-765794883524
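  For illustration, a hedged sketch of that CPUID check with the GCC/Clang <cpuid.h> helper; bit 29 of EDX in leaf 0x80000001 is the architectural "64-bit (long) mode available" flag, which is what decides between the 32-bit and 64-bit SMM save state layouts (the function name is made up):

    #include <stdbool.h>
    #include <cpuid.h>

    #define CPUID_EXTENDED_CPU_SIG  0x80000001u
    #define CPUID_LM_BIT            (1u << 29)   /* EDX[29]: long (64-bit) mode supported */

    static bool SmmSaveStateIs64Bit (void)
    {
      unsigned int Eax, Ebx, Ecx, Edx;

      /* __get_cpuid returns 0 if the requested leaf is not available. */
      if (__get_cpuid (CPUID_EXTENDED_CPU_SIG, &Eax, &Ebx, &Ecx, &Edx) == 0) {
        return false;
      }
      return (Edx & CPUID_LM_BIT) != 0;
    }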
* UefiCpuPkg/PiSmmCpu: Update function call for 2 new APIs. (Yao, Jiewen; 2015-11-27; 1 file, -2/+35)
  All page table allocation will use AllocatePageTableMemory(). Add SmmCpuFeaturesCompleteSmmReadyToLock() to PerformRemainingTasks() and PerformPreTasks().
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
  Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
  Cc: "Fan, Jeff" <jeff.fan@intel.com>
  Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>
  Cc: "Laszlo Ersek" <lersek@redhat.com>
  git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18981 6f19259b-4bc3-4df7-8a09-765794883524
* Revert "Add 2 APIs in SmmCpuFeaturesLib." (Laszlo Ersek; 2015-11-27; 1 file, -35/+2)
  This reverts SVN r18958 / git commit 9daa916dd1efe6443f9a66dfa882f3185d33ad28.
  The patch series had been fully reviewed on edk2-devel, but it got committed as a single squashed patch. Revert it for now.
  Link: http://thread.gmane.org/gmane.comp.bios.edk2.devel/4951
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18978 6f19259b-4bc3-4df7-8a09-765794883524
* Add 2 APIs in SmmCpuFeaturesLib. (Yao, Jiewen; 2015-11-26; 1 file, -2/+35)
  Add NULL implementations for 2 new APIs in SmmCpuFeaturesLib.
  SmmCpuFeaturesCompleteSmmReadyToLock() is a hook point to allow CPU-specific code to do more register setting after the gEfiSmmReadyToLockProtocolGuid notification is completely processed. Add SmmCpuFeaturesCompleteSmmReadyToLock() to PerformRemainingTasks() and PerformPreTasks().
  SmmCpuFeaturesAllocatePageTableMemory() is an API to allow the CPU to allocate a specific region for storing page tables. All page table allocation will use AllocatePageTableMemory().
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: "Yao, Jiewen" <jiewen.yao@intel.com>
  Reviewed-by: "Kinney, Michael D" <michael.d.kinney@intel.com>
  git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@18958 6f19259b-4bc3-4df7-8a09-765794883524