path: root/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
* UefiCpuPkg: PiSmmCpuDxeSmm: Check buffer size before accessing  (Kun Qin, 2021-04-12; 1 file changed, -1/+1)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3283
  The current SMM save state routine does not check the number of bytes to be read, when it comes to reading IO_INFO, before casting the incoming buffer to EFI_SMM_SAVE_STATE_IO_INFO. This could potentially cause memory corruption, because extra bytes are written outside the buffer boundary. This change adds a width check before copying IoInfo into the output buffer.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Kun Qin <kuqin12@gmail.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Message-Id: <20210406195254.1018-2-kuqin12@gmail.com>
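  A minimal sketch of the kind of bounds check described above (illustrative only, not the verbatim edk2 fix; Width, IoInfo and Buffer follow the names used in the commit text):

    //
    // Validate the caller-supplied width before treating Buffer as an
    // EFI_SMM_SAVE_STATE_IO_INFO, so nothing is copied past its end.
    //
    if (Width < sizeof (EFI_SMM_SAVE_STATE_IO_INFO)) {
      return EFI_INVALID_PARAMETER;
    }
    CopyMem (Buffer, &IoInfo, sizeof (EFI_SMM_SAVE_STATE_IO_INFO));
    return EFI_SUCCESS;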
* UefiCpuPkg/PiSmmCpuDxeSmm: Reflect page table depth with page table address  (Sheng Wei, 2020-11-18; 1 file changed, -5/+8)
  When getting the page table base: if mInternalCr3 is zero, use the page table from CR3 and derive the page table depth from the CR4 LA57 bit; if mInternalCr3 is non-zero, use the page table from mInternalCr3 and report the depth of that table at the same time. In the X64 case, m5LevelPagingNeeded is used to reflect the depth of the page table; in the IA32 case the page table depth information is not needed.
  This patch is a bug fix for enabling the CET feature with 5-level paging. The SMM page tables are allocated/initialized in PiCpuSmmEntry(). When CET is enabled, PiCpuSmmEntry() must further modify the attributes of the shadow stack pages. That page table is not yet installed in CR3 at this point, so its base address is stored in mInternalCr3 while the page table attributes are modified, and the CR4 LA57 bit cannot be used to tell the depth of the mInternalCr3 table. We therefore create an architecture-specific implementation, GetPageTable(), with two output parameters: one returns the page table address, the other reports whether 5-level paging is in use.
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3015
  Signed-off-by: Sheng Wei <w.sheng@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
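  A sketch of the X64 flavor of the helper described above. It assumes the mInternalCr3 and m5LevelPagingNeeded globals named in the message and BaseLib's AsmReadCr3()/AsmReadCr4(); the exact edk2 signature and address masking may differ:

    VOID
    GetPageTable (
      OUT UINTN    *Base,
      OUT BOOLEAN  *FiveLevels  OPTIONAL
      )
    {
      if (mInternalCr3 == 0) {
        //
        // Normal case: the active page table comes from CR3 and its depth
        // can be read from CR4.LA57 (bit 12).
        //
        *Base = AsmReadCr3 () & ~(UINTN)0xFFF;   // drop the low flag bits
        if (FiveLevels != NULL) {
          *FiveLevels = ((AsmReadCr4 () & BIT12) != 0);
        }
        return;
      }
      //
      // mInternalCr3 points at a table that is not installed in CR3 yet
      // (e.g. while PiCpuSmmEntry() marks shadow stack pages), so CR4.LA57
      // says nothing about it; use the module's own depth flag instead.
      //
      *Base = mInternalCr3;
      if (FiveLevels != NULL) {
        *FiveLevels = m5LevelPagingNeeded;
      }
    }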
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove Used parameter.  (Dong, Eric, 2020-04-13; 1 file changed, -1/+0)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
  After the patch "UefiCpuPkg/PiSmmCpuDxeSmm: Improve the performance of GetFreeToken()", which adds FirstFreeToken, the Used parameter is no longer needed. This patch removes it.
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Improve the performance of GetFreeToken()  (Ray Ni, 2020-04-13; 1 file changed, -1/+2)
  Today's GetFreeToken() runs at an algorithmic complexity of O(n), where n is the size of the token list. This change introduces a new global variable, FirstFreeToken, which always points to the first free token, so the complexity of GetFreeToken() drops from O(n) to O(1). The improvement matters when some SMI code uses the StartupThisAP() service for each AP, in which case the overall complexity becomes O(n) * O(m), where m is the AP count.
  As next steps:
  1. The PROCEDURE_TOKEN.Used field can be optimized out, because all tokens before FirstFreeToken have "Used" set while all tokens after FirstFreeToken have "Used" cleared.
  2. ResetTokens() can be optimized to reset only the tokens before FirstFreeToken.
  v2: add missing line in InitializeDataForMmMp.
  v3: update copyright year to 2020.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
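  A simplified illustration of the FirstFreeToken idea (not the edk2 data structures; the pool-size variable and GrowTokenPool() helper are hypothetical):

    STATIC UINTN  mFirstFreeToken = 0;   // index of the first unused token
    STATIC UINTN  mTokenCount     = 0;   // tokens currently in the pool

    UINTN
    GetFreeToken (
      VOID
      )
    {
      if (mFirstFreeToken == mTokenCount) {
        GrowTokenPool ();                // hypothetical: allocate another block of tokens
      }
      //
      // O(1): every token before the cursor is in use and every token after
      // it is free, so no list walk (and no per-token Used flag) is needed.
      //
      return mFirstFreeToken++;
    }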
* UefiCpuPkg/PiSmm: Fix various typos  (Antoine Coeur, 2020-02-10; 1 file changed, -6/+6)
  Fix various typos in comments and documentation.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Antoine Coeur <coeur@gmx.fr>
  Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Signed-off-by: Philippe Mathieu-Daude <philmd@redhat.com>
  Message-Id: <20200207010831.9046-78-philmd@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Pre-allocate PROCEDURE_TOKEN buffer  (Eric Dong, 2020-01-02; 1 file changed, -5/+1)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
  Token is newly introduced by the MM MP Protocol. The current logic allocates a Token every time one is needed, which raises SMI latency considerably. Update the logic to allocate a Token buffer at the driver's entry point and hand out tokens from that buffer; a new buffer is allocated only when all entries have been used. The former change (9caaa79dd7e078ebb4012dde3b3d3a5d451df609) missed the PROCEDURE_TOKEN part; this change covers it.
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove dependence between APs  (Eric Dong, 2019-12-24; 1 file changed, -2/+3)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2268
  In the current implementation, when checking whether APs were called by StartUpAllAPs or StartUpThisAp, the code inspects the Token values used by other APs, and each AP also updates its own Token value when its task finishes. This creates a potential race condition on the tokens, and the system may trigger an ASSERT during cycling tests. This change enhances the code logic by adding new attributes to the token, removing any reference to tokens that belong to other APs.
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Avoid allocate Token every time  (Eric Dong, 2019-12-06; 1 file changed, -0/+15)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2388
  Token is newly introduced by the MM MP Protocol. The current logic allocates a Token every time one is needed, which raises SMI latency considerably. Update the logic to allocate a Token buffer at the driver's entry point and hand out tokens from that buffer; a new buffer is allocated only when all entries have been used.
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Update the coding styles  (Shenglei Zhang, 2019-12-04; 1 file changed, -1/+1)
  In MpLib.c, remove the white space on a new line. In PageTbl.c and PiSmmCpuDxeSmm.h, update the comment style.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpu: Restrict access per PcdCpuSmmRestrictedMemoryAccess  (Ray Ni, 2019-09-04; 1 file changed, -0/+11)
  Today's behavior is to always restrict access to non-SMRAM regardless of the value of PcdCpuSmmRestrictedMemoryAccess. Because RAS components need to access all non-SMRAM memory, the patch changes the code logic to honor PcdCpuSmmRestrictedMemoryAccess, so that the restriction takes effect, and page table memory is also protected, only when the PCD is TRUE. Because the IA32 build does not reference this PCD, the restriction always takes effect in the IA32 build.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix coding style  (Shenglei Zhang, 2019-08-13; 1 file changed, -3/+3)
  1. Update CPUStatus to CpuStatus in comments to align comments with code.
  2. Add "OUT" attribute for "ProcedureArguments" parameter in function header.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg: Update code to include register definitions from MdePkg  (Ni, Ray, 2019-08-09; 1 file changed, -2/+2)
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Make code consistent with comments  (shenglei, 2019-08-06; 1 file changed, -2/+2)
  1. Remove "out" attribute for "Buffer" parameter in function header.
  2. Add "out" attribute for "Token" parameter in function header.
  3. Update ProcedureArgument to ProcedureArguments.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Enable MM MP Protocol  (Eric Dong, 2019-07-16; 1 file changed, -1/+192)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1937
  Add MM Mp Protocol in PiSmmCpuDxeSmm driver.
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Replace BSD License with BSD+Patent License  (Michael D Kinney, 2019-04-09; 1 file changed, -7/+1)
  https://bugzilla.tianocore.org/show_bug.cgi?id=1373
  Replace BSD 2-Clause License with BSD+Patent License. This change is based on the following emails:
  https://lists.01.org/pipermail/edk2-devel/2019-February/036260.html
  https://lists.01.org/pipermail/edk2-devel/2018-October/030385.html
  RFCs with detailed process for the license change:
  V3: https://lists.01.org/pipermail/edk2-devel/2019-March/038116.html
  V2: https://lists.01.org/pipermail/edk2-devel/2019-March/037669.html
  V1: https://lists.01.org/pipermail/edk2-devel/2019-March/037500.html
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg\CpuSmm: Save & restore CR2 on-demand paging in SMM  (Vanguput, Narendra K, 2019-04-04; 1 file changed, -0/+22)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1593
  For every SMI occurrence, save and restore the CR2 register only when SMM on-demand paging support is enabled in 64-bit operation mode. This is not a bug fix but an improvement of the code.
  Patch5 is updated with separate functions for Save and Restore of CR2, based on review feedback.
  Patch6 - Removed the global Cr2 and used a function parameter instead.
  Patch7 - Removed the check of Cr2 against 0, per feedback.
  Patch8 and 9 - Aligned with the EDK2 coding style.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Vanguput Narendra K <narendra.k.vanguput@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Yao Jiewen <jiewen.yao@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Nate DeSimone <nathaniel.l.desimone@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
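  A sketch of the save/restore pair described above, assuming BaseLib's AsmReadCr2()/AsmWriteCr2(); the gating flag name is illustrative (the real driver keys this off its static versus on-demand paging configuration):

    VOID
    SaveCr2 (
      OUT UINTN  *Cr2
      )
    {
      if (!mStaticPageTableEnabled) {   // on-demand paging: a #PF inside SMM may clobber CR2
        *Cr2 = AsmReadCr2 ();
      }
    }

    VOID
    RestoreCr2 (
      IN UINTN  Cr2
      )
    {
      if (!mStaticPageTableEnabled) {
        AsmWriteCr2 (Cr2);
      }
    }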
* UefiCpuPkg/PiSmmCpu: Add Shadow Stack Support for X86 SMM.  (Jiewen Yao, 2019-02-28; 1 file changed, -4/+99)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1521
  We scan the SMM code with ROPgadget (http://shell-storm.org/project/ROPgadget/, https://github.com/JonathanSalwan/ROPgadget/tree/master). This tool reports the gadgets in the SMM driver.
  This patch enables CET ShadowStack for X86 SMM. If CET is supported, SMM will enable CET ShadowStack. SMM CET will save the OS CET context at SmmEntry and restore the OS CET context at SmmExit.
  Test:
  1) Intel internal platform (x64 only, CET enabled/disabled).
  Boot test:
  CET supported or not supported CPU on a CET supported platform
  CET enabled/disabled
  PcdCpuSmmCetEnable enabled/disabled
  Single core/Multiple core
  PcdCpuSmmStackGuard enabled/disabled
  PcdCpuSmmProfileEnable enabled/disabled
  PcdCpuSmmStaticPageTable enabled/disabled
  CET exception test:
  #CF generated with PcdCpuSmmStackGuard enabled/disabled.
  Other exception test:
  #PF for normal stack overflow
  #PF for NX protection
  #PF for RO protection
  CET env test:
  Launch SMM in CET enabled/disabled environment (DXE) - no impact to DXE
  The test case can be found at https://github.com/jyao1/SecurityEx/tree/master/ControlFlowPkg
  2) OVMF (both IA32 and X64 SMM, CET disabled only): tested OvmfIa32/Ovmf3264 with -D SMM_REQUIRE.
  qemu-system-x86_64.exe -machine q35,smm=on -smp 4 -serial file:serial.log -drive if=pflash,format=raw,unit=0,file=OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd
  QEMU emulator version 3.1.0 (v3.1.0-11736-g7a30e7adb0-dirty)
  3) Not tested: IA32 CET enabled platform.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Yao Jiewen <jiewen.yao@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Clean up useless code.  (Eric Dong, 2018-10-26; 1 file changed, -16/+0)
  Remove useless code after change 93324390.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Dandan Bi <dandan.bi@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add logic to support semaphore type.  (Eric Dong, 2018-10-22; 1 file changed, -2/+1)
  V4 changes:
  1. Serial console log for different threads when programming the register table.
  2. Check AcpiCpuData before using it, to avoid a potential ASSERT.
  V3 changes:
  1. Use a global variable instead of an internal function to return the string for the register type and dependence type.
  2. Add comments for some complicated logic.
  V1 changes:
  Because this driver needs to set MSRs saved in the normal boot phase, sync the semaphore logic from the RegisterCpuFeaturesLib code used in the normal boot phase. For details, see the corresponding RegisterCpuFeaturesLib change: UefiCpuPkg/RegisterCpuFeaturesLib: Add logic to support semaphore type.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Remove redundant library classes, Ppis and GUIDs  (shenglei, 2018-09-21; 1 file changed, -1/+0)
  Some redundant library classes, Ppis and GUIDs have been removed from .inf, .c and .h files.
  v2:
  1. Remove ReadOnlyVariable2.h from S3Resume.c; it should have been deleted in the previous version, in which gEfiPeiReadOnlyVariable2PpiGuid was removed.
  2. Remove the library class BaseLib from CpuPageTable.c, as it is included elsewhere.
  3. Add back library classes in SecCore.inf that were removed in the previous version: DebugAgentLib and CpuExceptionHandlerLib.
  4. Add back two Ppis in SecCore.inf that were removed in the previous version: gEfiSecPlatformInformationPpiGuid and gEfiSecPlatformInformation2PpiGuid.
  https://bugzilla.tianocore.org/show_bug.cgi?id=1043
  https://bugzilla.tianocore.org/show_bug.cgi?id=1013
  https://bugzilla.tianocore.org/show_bug.cgi?id=1032
  https://bugzilla.tianocore.org/show_bug.cgi?id=1016
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: shenglei <shenglei.zhang@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpu: Check EFI_RUNTIME_RO in UEFI mem attrib table.  (Jiewen Yao, 2018-07-26; 1 file changed, -0/+2)
  Treat a UEFI runtime page with the EFI_MEMORY_RO attribute as an invalid SMM communication buffer.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmInitStack" with PatchInstructionX86()  (Laszlo Ersek, 2018-04-04; 1 file changed, -1/+1)
  Rename the variable to "gPatchSmmInitStack" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm". The size of the patched source operand is (sizeof (UINTN)).
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: eliminate "gSmmJmpAddr" and related DBs  (Laszlo Ersek, 2018-04-04; 1 file changed, -11/+0)
  The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can simply use the NASM syntax for the following Mixed-Size Jump:
  > jmp PROTECT_MODE_CS : dword @32bit
  The generated object code for the instruction is unchanged:
  > 00000182 66EA5A0000000800  jmp dword 0x8:0x5a
  (The NASM manual explains that putting the DWORD prefix after the colon ":" reflects the intent better, since it is the offset that is a DWORD. Thus, that's what I used. However, both syntaxes are interchangeable, hence the ndisasm output.)
  The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr"; however that's accidental, not inherent:
  - Bring LONG_MODE_CODE_SEGMENT from "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32 version as PROTECT_MODE_CS earlier.
  - Apply the NASM-native Mixed-Size Jump syntax again, but jump to the fixed zero offset in LONG_MODE_CS. This will produce no relocation record at all. Add a label after the instruction.
  - Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards from the label. Because we modify the DWORD offset with a DWORD access, the segment selector is unharmed in the instruction, and we need not set it from PiCpuSmmEntry().
  According to "objdump --reloc", the X64 version undergoes only the following relocations, after this patch:
  > RELOCATION RECORDS FOR [.text]:
  > OFFSET           TYPE          VALUE
  > 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
  > 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
  > 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
  Therefore the patch does not regress <https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool chain for UefiCpuPkg with nasm source code").
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr0" with PatchInstructionX86()  (Laszlo Ersek, 2018-04-04; 1 file changed, -1/+2)
  Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for machine code patching, but also as a means to communicate the initial CR0 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr0Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr0Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr0" variable to "gPatchSmmCr0" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr0". This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
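  Roughly, the C side of the role split described above looks like the sketch below (assuming BaseLib's PatchInstructionX86(); abbreviated, not verbatim edk2 code):

    //
    // Data role: mSmmCr0 is read later by InitSmmS3ResumeState().
    // Code role: the immediate of "mov eax, <imm32>" in SmmInit.nasm is
    // patched through the gPatchSmmCr0 label.
    //
    mSmmCr0 = (UINT32)AsmReadCr0 ();
    PatchInstructionX86 (gPatchSmmCr0, mSmmCr0, 4);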
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr4" with PatchInstructionX86()  (Laszlo Ersek, 2018-04-04; 1 file changed, -1/+2)
  Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for machine code patching, but also as a means to communicate the initial CR4 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr4Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr4Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr4" variable to "gPatchSmmCr4" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr4". This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr3" with PatchInstructionX86()  (Laszlo Ersek, 2018-04-04; 1 file changed, -1/+1)
  Rename the variable to "gPatchSmmCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg PiSmmCpuDxeSmm: Refine some comments about SmmMemoryAttribute  (Star Zeng, 2018-04-04; 1 file changed, -9/+6)
  1. Fix some "support" to "supported".
  2. Fix some "set" to "clear" in ClearMemoryAttributes interface.
  3. Remove redundant comments for GetMemoryAttributes interface.
  Cc: Jian J Wang <jian.j.wang@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Update PiSmmCpuDxeSmm pass XCODE5 tool chain  (Liming Gao, 2018-01-16; 1 file changed, -0/+18)
  https://bugzilla.tianocore.org/show_bug.cgi?id=849
  In V2, use "mov rax, strict qword 0" to replace the hard-coded db.
  1. Use the lea instruction to get the address instead of the mov instruction.
  2. Use a dummy address as the jmp destination, and add logic to fix up the address to the absolute address at boot time.
  3. In MpFuncs.nasm, use ExchangeInfo to record InitializeFloatingPointUnits. This is the same approach as MpInitLib.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
  Cc: Andrew Fish <afish@apple.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg PiSmmCpuDxeSmm: Fixed #double fault on #page fault for IA32  (Star Zeng, 2018-01-15; 1 file changed, -9/+1)
  When StackGuard is enabled on IA32, the #double fault exception is reported instead of #page fault. This issue does not exist on X64, or on IA32 without StackGuard.
  The fix at e4435f710cea2d2f10cd7343d545920867780086 was incomplete: because AllocateCodePages() is used to allocate the buffer for the GDT and TSS, those code pages are set to RO in SetMemMapAttributes(). But IA32 Stack Guard needs a task switch to switch stacks, which needs to write the GDT and TSS, so AllocateCodePages() cannot be used. This patch uses AllocatePages() instead of AllocateCodePages() to allocate the buffer for the GDT and TSS when StackGuard is enabled on IA32.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jian J Wang <jian.j.wang@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg: Fix unix style of EOL  (Jian J Wang, 2017-11-21; 1 file changed, -98/+98)
  Cc: Wu Hao <hao.a.wu@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add SmmMemoryAttribute protocol  (Jian J Wang, 2017-11-17; 1 file changed, -0/+98)
  Heap guard makes use of the paging mechanism to implement its functionality, but there is no protocol or library available to change page attributes in SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that possible. This protocol provides three interfaces:
  struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
    EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
    EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
    EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
  };
  Since the heap guard feature needs to update page attributes, the page table must not be set to read-only when heap guard is enabled for SMM mode; otherwise the feature cannot work.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
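  A hypothetical caller sketch showing how an SMM driver could use the new protocol (the Address variable and the chosen attribute are illustrative, not from the commit):

    EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *MemAttribute;
    EFI_STATUS                           Status;

    Status = gSmst->SmmLocateProtocol (
                      &gEdkiiSmmMemoryAttributeProtocolGuid,
                      NULL,
                      (VOID **)&MemAttribute
                      );
    if (!EFI_ERROR (Status)) {
      //
      // e.g. make one guard page inaccessible (read-protected).
      //
      Status = MemAttribute->SetMemoryAttributes (
                               MemAttribute,
                               Address,
                               EFI_PAGE_SIZE,
                               EFI_MEMORY_RP
                               );
    }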
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix memory protection crash  (Star Zeng, 2017-08-28; 1 file changed, -0/+1)
  https://bugzilla.tianocore.org/show_bug.cgi?id=624 reports a memory protection crash in PiSmmCpuDxeSmm, Ia32 build, with RAM above 4GB (of which 2GB are placed at 64-bit addresses). It happens because UEFI builds identity-mapping page tables, and >4G addresses are not supported in the Ia32 build.
  This patch captures the PhysicalAddressBits that was used to build the page tables in PageTbl.c (Ia32/X64), and uses it in ConvertMemoryPageAttributes() to check whether an address is supported. With this patch, the debug messages look like below.
  UefiMemory protection: 0x0 - 0x9F000 Success
  UefiMemory protection: 0x100000 - 0x807000 Success
  UefiMemory protection: 0x808000 - 0x810000 Success
  UefiMemory protection: 0x818000 - 0x820000 Success
  UefiMemory protection: 0x1510000 - 0x7B798000 Success
  UefiMemory protection: 0x7B79B000 - 0x7E538000 Success
  UefiMemory protection: 0x7E539000 - 0x7E545000 Success
  UefiMemory protection: 0x7E55A000 - 0x7E61F000 Success
  UefiMemory protection: 0x7E62B000 - 0x7F6AB000 Success
  UefiMemory protection: 0x7F703000 - 0x7F70B000 Success
  UefiMemory protection: 0x7F70F000 - 0x7F778000 Success
  UefiMemory protection: 0x100000000 - 0x180000000 Unsupported
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Originally-suggested-by: Jiewen Yao <jiewen.yao@intel.com>
  Reported-by: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
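  The check amounts to something like the sketch below (illustrative only; the name of the global holding the address width follows the commit's description, and Length is assumed to be non-zero):

    //
    // Reject any range that the identity-mapped page tables cannot cover,
    // based on the physical address width captured when the tables were built.
    //
    MaxAddress = LShiftU64 (1, mPhysicalAddressBits) - 1;
    if (BaseAddress + Length - 1 > MaxAddress) {
      return RETURN_UNSUPPORTED;   // e.g. 0x100000000 - 0x180000000 in the Ia32 log above
    }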
* UefiCpuPkg/PiSmmCpuDxeSmm: Consume new APIs  (Jeff Fan, 2017-04-07; 1 file changed, -2/+2)
  Consume PeCoffSearchImageBase() from PeCoffGetEntryPointLib and DumpCpuContext() from CpuExceptionHandlerLib to replace the driver's own implementations.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Save SMM ranges info into global variables  (Jeff Fan, 2017-04-01; 1 file changed, -1/+5)
  v2: Add #define SMRR_MAX_ADDRESS to clarify SMRR requirement.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask  (Leo Duran, 2017-03-01; 1 file changed, -1/+7)
  This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature. The mask is applied when page table entries are created or modified.
  CC: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Brijesh Singh <brijesh.singh@amd.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Leo Duran <leo.duran@amd.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpu: Add SMM Comm Buffer Paging Protection.  (Jiewen Yao, 2016-12-19; 1 file changed, -0/+29)
  This patch marks the normal OS buffer types EfiLoaderCode/Data, EfiBootServicesCode/Data, EfiConventionalMemory and EfiACPIReclaimMemory as not present after SmmReadyToLock. Accessing these regions during the OS runtime phase is not a good practice. Previously we did a similar check in SmmMemLib to help SMI handlers perform the check, but if an SMI handler forgets the check it can still access these OS regions, which brings risk. So here we enforce the policy to prevent that from happening.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpu: Fixed #double fault on #page fault.  (Jiewen Yao, 2016-12-07; 1 file changed, -0/+68)
  This patch fixes https://bugzilla.tianocore.org/show_bug.cgi?id=246
  Previously, when an SMM exception happened after EndOfDxe with StackGuard enabled on IA32, the #double fault exception was reported instead of #page fault.
  Root cause: the current EDKII SMM page protection locks the GDT. If IA32 stack guard is enabled, the page fault handler performs a task switch; the task switch needs to write the busy flag in the GDT and write the TSS. However, the GDT and TSS are locked at that time, so the double fault happens. We decided not to lock the GDT when IA32 StackGuard is enabled. This issue does not exist on X64, or on IA32 without StackGuard.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove PSD layout assumptions  (Michael Kinney, 2016-12-01; 1 file changed, -26/+0)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  Remove dependency on layout of PROCESSOR_SMM_DESCRIPTOR everywhere possible. The only exception is the standard SMI entry handler template that is included with the PiSmmCpuDxeSmm module. This allows an instance of the SmmCpuFeaturesLib to provide alternate PROCESSOR_SMM_DESCRIPTOR structure layouts.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove MTRRs from PSD structure  (Michael Kinney, 2016-12-01; 1 file changed, -1/+1)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  All CPUs use the same MTRR settings. Move MTRR settings from a field in the PROCESSOR_SMM_DESCRIPTOR structure into a module global variable.
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: TransferApToSafeState() use UINTN params  (Michael Kinney, 2016-11-17; 1 file changed, -6/+6)
  Update TransferApToSafeState() to use UINTN params to reduce the number of type casts required in these calls. Also change the NumberToFinish parameter from UINT32* to UINTN NumberToFinishAddress to resolve issues with conversion from a volatile pointer to a non-volatile pointer. The assembly code that receives the NumberToFinishAddress value must treat that memory location as volatile to track the number of APs.
  Cc: Liming Gao <liming.gao@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Andrew Fish <afish@apple.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection.  (Jiewen Yao, 2016-11-17; 1 file changed, -5/+151)
  PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table as follows:
  1) The code region is marked as read-only and the data region is non-executable, if the PE image is 4K aligned.
  2) Important data structures are set to RO, such as the GDT/IDT.
  3) SmmSaveState is set to non-executable, and SmmEntrypoint is set to read-only.
  4) If static paging is supported, the page table is read-only.
  We use the page table to protect other components, and itself. If we use dynamic paging, we can still provide *partial* protection, and hope the page table is not modified by other components. The XD enabling code is moved to SmiEntry to let NX take effect.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Decrease mNumberToFinish in AP safe code  (Jeff Fan, 2016-11-15; 1 file changed, -1/+3)
  We put the APs into a hlt-loop in safe code, but we decrease mNumberToFinish before the APs enter that safe code; Paolo pointed out this gap. This patch moves the mNumberToFinish decrement into the safe code, which makes sure the BSP waits until all APs are running in safe code.
  https://bugzilla.tianocore.org/show_bug.cgi?id=216
  Reported-by: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Put AP into safe hlt-loop code on S3 path  (Jeff Fan, 2016-11-15; 1 file changed, -0/+13)
  On the S3 path, we wake up the APs to restore CPU context in the PiSmmCpuDxeSmm driver. However, after the CPU contexts are restored we place the APs in a hlt-loop located in borrowed memory under 1MB. If an NMI or SMI happens, an AP may exit the hlt state and execute the instruction after the HLT instruction, but the code under 1MB is no longer safe at that time.
  This fix allocates one ACPI NVS range to hold the AP hlt-loop code; once the CPU contexts have been restored, the APs execute within this ACPI NVS range.
  https://bugzilla.tianocore.org/show_bug.cgi?id=216
  v2:
  1. Make the stack alignment per Laszlo's comment.
  2. Trim whitespace at end of line.
  3. Update the year in the file header.
  Reported-by: Laszlo Ersek <lersek@redhat.com>
  Analyzed-by: Paolo Bonzini <pbonzini@redhat.com>
  Analyzed-by: Laszlo Ersek <lersek@redhat.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Consume PcdAcpiS3Enable to control the code  (Star Zeng, 2016-09-01; 1 file changed, -0/+9)
  If PcdAcpiS3Enable is disabled, then skip S3 related logic.
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Move S3 related code to CpuS3.c  (Star Zeng, 2016-09-01; 1 file changed, -21/+40)
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add MemoryMapped in SetProcessorRegister()  (Jeff Fan, 2016-07-14; 1 file changed, -0/+2)
  REGISTER_TYPE in UefiCpuPkg/Include/AcpiCpuData.h defines a MemoryMapped enum value; however, support for the MemoryMapped enum is missing from the implementation of SetProcessorRegister(). This patch adds support for the MemoryMapped type to SetProcessorRegister(). One spin lock is added to avoid potential conflicts when multiple processors update the same memory space.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
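  The MemoryMapped branch can be pictured roughly as below (a sketch assuming IoLib's MmioBitFieldWrite32() and a SynchronizationLib spin lock; the lock name is illustrative, and the real code's handling of 64-bit addresses is omitted):

    case MemoryMapped:
      AcquireSpinLock (mMemoryMappedLock);
      MmioBitFieldWrite32 (
        (UINTN)RegisterTableEntry->Index,
        RegisterTableEntry->ValidBitStart,
        RegisterTableEntry->ValidBitStart + RegisterTableEntry->ValidBitLength - 1,
        (UINT32)RegisterTableEntry->Value
        );
      ReleaseSpinLock (mMemoryMappedLock);
      break;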
* UefiCpuPkg/PiSmmCpuDxeSmm: Using MSRs semaphores in aligned buffer  (Jeff Fan, 2016-05-24; 1 file changed, -1/+3)
  Update the MSR semaphores to the ones in the allocated aligned semaphores buffer. If the MSR semaphores are not enough, allocate one more page.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Allocate buffer for MSRs semaphores  (Jeff Fan, 2016-05-24; 1 file changed, -0/+10)
  Allocate the MSR semaphores in the allocated aligned semaphores buffer, and add them to the semaphores structure.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Using CPU semaphores in aligned buffer  (Jeff Fan, 2016-05-24; 1 file changed, -3/+3)
  Update each CPU's semaphores to the ones in the allocated aligned semaphores buffer.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Allocate buffer for each CPU semaphores  (Jeff Fan, 2016-05-24; 1 file changed, -0/+11)
  Allocate each CPU's semaphores in the allocated aligned semaphores buffer, and add them to the semaphores structure.
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Cc: Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>