path: root/UefiCpuPkg/PiSmmCpuDxeSmm
* UefiCpuPkg: PiSmmCpuDxeSmm Add the missing ASM_PFX in nasm code (Liming Gao, 2017-12-08; 1 file changed, +5/-5)
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>
* UefiCpuPkg PiSmmCpuDxeSmm: SMM profile and static paging mutual exclusion (Star Zeng, 2017-12-08; 2 files changed, +20/-4)
  SMM profile and static paging cannot be enabled at the same time; this patch adds a check and comments to enforce that. Similar comments are also added for the combination of static paging and heap guard in SMM.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>, Ruiyu Ni <ruiyu.ni@intel.com>, Jian J Wang <jian.j.wang@intel.com>, Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>, Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg PiSmmCpuDxeSmm: Only DumpCpuContext in error case (Star Zeng, 2017-12-08; 2 files changed, +8/-4)
  Only call DumpCpuContext() in the error case; otherwise there are too many debug messages from DumpCpuContext() when the SmmProfile feature is enabled by setting PcdCpuSmmProfileEnable to TRUE. Those debug messages are not needed for the SmmProfile feature, as it records that information to a buffer for later dumping.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>, Laszlo Ersek <lersek@redhat.com>, Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg: Fix unix style of EOL (Jian J Wang, 2017-11-21; 6 files changed, +307/-307)
  Cc: Wu Hao <hao.a.wu@intel.com>, Star Zeng <star.zeng@intel.com>, Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add SmmMemoryAttribute protocol (Jian J Wang, 2017-11-17; 6 files changed, +307/-1)
  Heap guard makes use of the paging mechanism to implement its functionality, but there is no protocol or library available to change page attributes in SMM mode. A new protocol, gEdkiiSmmMemoryAttributeProtocolGuid, is introduced to make that possible. The protocol provides three interfaces:

      struct _EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL {
        EDKII_SMM_GET_MEMORY_ATTRIBUTES    GetMemoryAttributes;
        EDKII_SMM_SET_MEMORY_ATTRIBUTES    SetMemoryAttributes;
        EDKII_SMM_CLEAR_MEMORY_ATTRIBUTES  ClearMemoryAttributes;
      };

  Since the heap guard feature needs to update page attributes, the page table must not be set read-only when heap guard is enabled for SMM mode; otherwise the feature cannot work.
  Cc: Eric Dong <eric.dong@intel.com>, Jiewen Yao <jiewen.yao@intel.com>, Laszlo Ersek <lersek@redhat.com>, Ruiyu Ni <ruiyu.ni@intel.com>
  Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
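  A consumer-side sketch of how another SMM driver might use this protocol. It assumes the setter takes (This, BaseAddress, Length, Attributes), mirroring the DXE memory attribute protocol, and that the header lives at Protocol/SmmMemoryAttribute.h; the guard-page address parameter is a placeholder.

      #include <PiSmm.h>
      #include <Library/SmmServicesTableLib.h>
      #include <Protocol/SmmMemoryAttribute.h>

      EFI_STATUS
      MarkGuardPageNotAccessible (
        IN EFI_PHYSICAL_ADDRESS  GuardPageAddress   // placeholder page to protect
        )
      {
        EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL  *MemAttr;
        EFI_STATUS                           Status;

        Status = gSmst->SmmLocateProtocol (
                          &gEdkiiSmmMemoryAttributeProtocolGuid,
                          NULL,
                          (VOID **)&MemAttr
                          );
        if (EFI_ERROR (Status)) {
          return Status;
        }
        //
        // Mark one 4 KB page as read-protected so any access to it faults.
        //
        return MemAttr->SetMemoryAttributes (
                          MemAttr,
                          GuardPageAddress,
                          SIZE_4KB,
                          EFI_MEMORY_RP
                          );
      }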
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix bitwise size issue (Jian J Wang, 2017-10-16; 1 file changed, +1/-1)
  Cc: Eric Dong <eric.dong@intel.com>, Hao Wu <hao.a.wu@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Implement NULL pointer detection for SMM code (Jian J Wang, 2017-10-11; 4 files changed, +49/-1)
  The mechanism behind it is the same as the NULL pointer detection enabled in the EDK II core. SMM has its own page table, so page 0 has to be disabled again in SMM mode.
  Cc: Star Zeng <star.zeng@intel.com>, Eric Dong <eric.dong@intel.com>, Jiewen Yao <jiewen.yao@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Ayellet Wolman <ayellet.wolman@intel.com>
  Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>, Eric Dong <eric.dong@intel.com>
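  A minimal fragment illustrating the idea, assuming the module's internal SmmSetMemoryAttributes(BaseAddress, Length, Attributes) helper and that BIT1 of PcdNullPointerDetectionPropertyMask selects the SMM case; the real patch may gate and name things differently.

      //
      // Sketch: make page 0 inaccessible in the SMM page table so NULL
      // dereferences inside SMI handlers fault immediately.
      //
      if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
        SmmSetMemoryAttributes (0, SIZE_4KB, EFI_MEMORY_RP);
      }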
* UefiCpuPkg/PiSmmCpuDxeSmm: Add check to avoid using a null pointer. (Eric Dong, 2017-10-09; 1 file changed, +2/-0)
  The current code does not check the pointer before using it, which is a potential issue. This patch adds code to check it.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>, Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Refine code to avoid duplicated code. (Eric Dong, 2017-09-29; 1 file changed, +24/-60)
  V2: Change the function parameter so the function does not touch global data, and rename the function to be more descriptive.
  V1: Refine the code to avoid duplicating the logic that sets processor registers.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Combine INIT-SIPI-SIPI. (Eric Dong, 2017-09-29; 1 file changed, +26/-25)
  In the S3 resume path, the current implementation issues two separate INIT-SIPI-SIPI sequences, which is not necessary. This change combines the two into one and adds CpuPause() in between.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Centralize mPhysicalAddressBits definition (Star Zeng, 2017-08-28; 3 files changed, +2/-3)
  Originally (before 714c2603018a99a514c42c2b511c821f30ba9cdf), mPhysicalAddressBits was defined only in the X64 PageTbl.c. After that commit, mPhysicalAddressBits is also defined in the Ia32 PageTbl.c and is used in ConvertMemoryPageAttributes() for the address check. This patch centralizes the mPhysicalAddressBits definition in PiSmmCpuDxeSmm.c instead of the Ia32 and X64 PageTbl.c files.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Laszlo Ersek <lersek@redhat.com>, Eric Dong <eric.dong@intel.com>
  Suggested-by: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>, Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix memory protection crash (Star Zeng, 2017-08-28; 3 files changed, +30/-6)
  https://bugzilla.tianocore.org/show_bug.cgi?id=624 reports a memory protection crash in PiSmmCpuDxeSmm on an Ia32 build with RAM above 4 GB (of which 2 GB are placed at 64-bit addresses). The crash happens because UEFI builds identity-mapping page tables, and >4 GB addresses are not supported in an Ia32 build. This patch obtains the PhysicalAddressBits value that is used to build the page tables in PageTbl.c (Ia32/X64) and uses it in ConvertMemoryPageAttributes() to check whether an address is supported. With this patch, the debug messages look like this:

      UefiMemory protection: 0x0 - 0x9F000 Success
      UefiMemory protection: 0x100000 - 0x807000 Success
      UefiMemory protection: 0x808000 - 0x810000 Success
      UefiMemory protection: 0x818000 - 0x820000 Success
      UefiMemory protection: 0x1510000 - 0x7B798000 Success
      UefiMemory protection: 0x7B79B000 - 0x7E538000 Success
      UefiMemory protection: 0x7E539000 - 0x7E545000 Success
      UefiMemory protection: 0x7E55A000 - 0x7E61F000 Success
      UefiMemory protection: 0x7E62B000 - 0x7F6AB000 Success
      UefiMemory protection: 0x7F703000 - 0x7F70B000 Success
      UefiMemory protection: 0x7F70F000 - 0x7F778000 Success
      UefiMemory protection: 0x100000000 - 0x180000000 Unsupported

  Cc: Jiewen Yao <jiewen.yao@intel.com>, Laszlo Ersek <lersek@redhat.com>, Eric Dong <eric.dong@intel.com>
  Originally-suggested-by: Jiewen Yao <jiewen.yao@intel.com>
  Reported-by: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
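  A fragment sketching the kind of range check described above (BaseLib's LShiftU64 is used to compute the limit); the local variable and return value are illustrative, not necessarily what the patch uses.

      //
      // Sketch: reject any range that reaches beyond what the page tables
      // can map, given the physical address width used to build them.
      //
      UINT64  MaximumSupportedAddress;

      MaximumSupportedAddress = LShiftU64 (1, mPhysicalAddressBits);
      if ((BaseAddress >= MaximumSupportedAddress) ||
          (Length > MaximumSupportedAddress - BaseAddress)) {
        return RETURN_UNSUPPORTED;
      }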
* UefiCpuPkg/PiSmmCpuDxeSmm: Add CPUID MCA support check (Michael D Kinney, 2017-08-17; 1 file changed, +14/-2)
  https://bugzilla.tianocore.org/show_bug.cgi?id=674
  Add CPUID check to see if the CPU supports the Machine Check Architecture before accessing the Machine Check Architecture related MSRs.
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
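  A sketch of such a guard. CPUID leaf 01H reports MCA support in EDX bit 14; whether the patch performs exactly this check is not shown in the log above.

      #include <Library/BaseLib.h>

      #define CPUID_VERSION_INFO  0x01

      BOOLEAN
      IsMcaSupported (
        VOID
        )
      {
        UINT32  RegEdx;

        AsmCpuid (CPUID_VERSION_INFO, NULL, NULL, NULL, &RegEdx);
        return (BOOLEAN)((RegEdx & BIT14) != 0);   // CPUID.01H:EDX.MCA[bit 14]
      }

  A caller would then read IA32_MCG_CAP (MSR 0x179) and the machine-check bank MSRs only when IsMcaSupported() returns TRUE.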
* UefiCpuPkg PiSmmCpuDxeSmm: Check LMCE capability when waiting for AP. (Eric Dong, 2017-08-04; 1 file changed, +56/-1)
  Cc: Jeff Fan <jeff.fan@intel.com>, Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add missing JMP instruction (Michael Kinney, 2017-05-19; 1 file changed, +2/-1)
  https://bugzilla.tianocore.org/show_bug.cgi?id=555
  Add the JMP instruction that is missing from the SmiEntry.S file. This updates SmiEntry.S to match the logic in SmiEntry.asm and SmiEntry.nasm. The default BUILDRULEORDER gives .nasm higher priority than .asm or .S, so this issue was not seen with the MSFT or GCC tool chain families. The XCODE5 tool chain overrides the BUILDRULEORDER with .S higher than .nasm, so this issue was only seen when using the XCODE5 tool chain with IA32 SMM enabled.
  Cc: Jeff Fan <jeff.fan@intel.com>, Andrew Fish <afish@apple.com>, Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix logic check error (Jeff Fan, 2017-05-11; 1 file changed, +1/-1)
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Check ProcessorId == INVALID_APIC_ID (Jeff Fan, 2017-05-11; 2 files changed, +9/-1)
  If PcdCpuHotPlugSupport is TRUE, gSmst->NumberOfCpus will be PcdCpuMaxLogicalProcessorNumber. If gSmst->SmmStartupThisAp() is invoked for processors that do not exist, an ASSERT() happens in ConfigSmmCodeAccessCheck(). This fix checks whether ProcessorId is valid before invoking gSmst->SmmStartupThisAp() in ConfigSmmCodeAccessCheck(), and also checks ProcessorId in InternalSmmStartupThisAp() to avoid displaying unexpected DEBUG error messages.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
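  An illustrative fragment of the kind of guard described above, assuming the module's gSmmCpuPrivate->ProcessorInfo[] bookkeeping and a per-CPU worker named here ConfigSmmCodeAccessCheckOnCurrentProcessor(); the exact names and loop shape in the patch may differ.

      for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
        //
        // Skip hot-plug slots that currently have no processor installed.
        //
        if (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId == INVALID_APIC_ID) {
          continue;
        }
        if (Index != gSmst->CurrentlyExecutingCpu) {
          Status = gSmst->SmmStartupThisAp (
                            ConfigSmmCodeAccessCheckOnCurrentProcessor,
                            Index,
                            NULL
                            );
          ASSERT_EFI_ERROR (Status);
        }
      }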
* PeCoffGetEntryPointLib: Fix spelling issue (Jeff Fan, 2017-04-26; 1 file changed, +1/-1)
  *Serach* should be *Search*.
  Cc: Liming Gao <liming.gao@intel.com>, Feng Tian <feng.tian@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Lock should be acquired (Jeff Fan, 2017-04-19; 1 file changed, +1/-1)
  The SMM BSP's *busy* state should be acquired, so AcquireSpinLock() is used instead of AcquireSpinLockOrFail().
  Cc: Hao Wu <hao.a.wu@intel.com>, Feng Tian <feng.tian@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Hao Wu <hao.a.wu@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Consume new APIs (Jeff Fan, 2017-04-07; 5 files changed, +18/-65)
  Consume PeCoffSearchImageBase() from PeCoffGetEntryPointLib and DumpCpuContext() from CpuExceptionHandlerLib to replace the module's own implementations.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
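  A sketch of how a page-fault handler can consume the two library APIs (shown X64-flavored); the surrounding handler logic is omitted and the message text is illustrative.

      #include <PiSmm.h>
      #include <Protocol/DebugSupport.h>
      #include <Library/CpuExceptionHandlerLib.h>
      #include <Library/PeCoffGetEntryPointLib.h>
      #include <Library/DebugLib.h>

      VOID
      ReportFault (
        IN EFI_EXCEPTION_TYPE  ExceptionType,
        IN EFI_SYSTEM_CONTEXT  SystemContext
        )
      {
        UINTN  ImageBase;

        DumpCpuContext (ExceptionType, SystemContext);            // library dump
        ImageBase = PeCoffSearchImageBase (
                      (UINTN)SystemContext.SystemContextX64->Rip  // faulting RIP
                      );
        if (ImageBase != 0) {
          DEBUG ((DEBUG_ERROR, "Fault in image loaded at 0x%lx\n", ImageBase));
        }
      }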
* UefiCpuPkg/PiSmmCpuDxeSmm: Update saved SMM ranges check in SmmProfile (Jeff Fan, 2017-04-01; 1 file changed, +36/-6)
  The SmmProfile feature is required to protect all SMM ranges through the mProtectionMemRangeTemplate structure. This update adds the additional saved SMM ranges into mProtectionMemRangeTemplate, besides the range specified by mCpuHotPlugData.SmrrBase/mCpuHotPlugData.SmrrSize.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add IsInSmmRanges() to check SMM range (Jeff Fan, 2017-04-01; 1 file changed, +31/-5)
  The internal function IsInSmmRanges() is added to check an address against the saved SMM ranges in addition to mCpuHotPlugData.SmrrBase/mCpuHotPlugData.SmrrSize.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
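  A sketch approximating what such a helper looks like, assuming the SMRAM descriptors saved by the next commit are available as module globals (the global names mSmmCpuSmramRanges/mSmmCpuSmramRangeCount are assumptions).

      BOOLEAN
      IsInSmmRanges (
        IN EFI_PHYSICAL_ADDRESS  Address
        )
      {
        UINTN  Index;

        //
        // First check the SMRR-covered range ...
        //
        if ((Address >= mCpuHotPlugData.SmrrBase) &&
            (Address < mCpuHotPlugData.SmrrBase + mCpuHotPlugData.SmrrSize)) {
          return TRUE;
        }
        //
        // ... then every saved SMRAM descriptor.
        //
        for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
          if ((Address >= mSmmCpuSmramRanges[Index].CpuStart) &&
              (Address < mSmmCpuSmramRanges[Index].CpuStart +
                         mSmmCpuSmramRanges[Index].PhysicalSize)) {
            return TRUE;
          }
        }
        return FALSE;
      }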
* UefiCpuPkg/PiSmmCpuDxeSmm: Save SMM ranges info into global variables (Jeff Fan, 2017-04-01; 2 files changed, +29/-21)
  v2: Add #define SMRR_MAX_ADDRESS to clarify the SMRR requirement.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/AcpiCpuData.h: Support >4GB MMIO address (Jeff Fan, 2017-03-27; 1 file changed, +1/-1)
  The current CPU_REGISTER_TABLE_ENTRY structure defines only a UINT32 Index to indicate the MSR/MMIO address. That is fine for MSRs, because MSR addresses really are UINT32, but for MMIO a UINT32 Index cannot express addresses above 4 GB. This update adds an additional UINT32 field, HighIndex, holding the high 32 bits of the MMIO address, while the original Index still holds the low 32 bits. The update reuses the original padding space between ValidBitLength and Value to add HighIndex.
  Cc: Feng Tian <feng.tian@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
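  A fragment sketching how a consumer reassembles the 64-bit MMIO address from the split fields and performs the write (the surrounding register-table walk and the Value field usage are simplified assumptions).

      #include <Library/BaseLib.h>
      #include <Library/IoLib.h>

      //
      // Index holds the low 32 bits of the MMIO address, HighIndex the high 32 bits.
      //
      UINT64  MmioAddress;

      MmioAddress = (UINT64)RegisterTableEntry->Index |
                    LShiftU64 (RegisterTableEntry->HighIndex, 32);
      MmioWrite32 ((UINTN)MmioAddress, (UINT32)RegisterTableEntry->Value);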
* UefiCpuPkg/PiSmmCpuDxeSmm: Skip if AllocatedSize is 0 (Jeff Fan, 2017-03-22; 1 file changed, +17/-13)
  There is no need to copy a register table whose AllocatedSize is 0.
  v4: Fix a potential uninitialized-variable issue.
  v5: Set DestinationRegisterTableList[Index].RegisterTableEntry before RegisterTableEntry is updated.
  Cc: Feng Tian <feng.tian@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/AcpiCpuData: Update RegisterTableEntry type (Jeff Fan, 2017-03-22; 1 file changed, +5/-5)
  The current RegisterTableEntry field in CPU_REGISTER_TABLE is a pointer to CPU_REGISTER_TABLE_ENTRY. If the CPU register table is passed from 32-bit PEI to x64 DXE/SMM, the x64 DXE/SMM side cannot read the correct RegisterTableEntry. This update changes the RegisterTableEntry type to EFI_PHYSICAL_ADDRESS so that RegisterTableEntry has a fixed length.
  Cc: Feng Tian <feng.tian@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
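  A sketch of the resulting consumer pattern: the field is stored as a fixed-width physical address and cast back to a pointer in the consuming phase. The surrounding CPU_REGISTER_TABLE handling is assumed.

      //
      // After the change, the entry list is addressed by a fixed-width
      // EFI_PHYSICAL_ADDRESS instead of a native-width pointer, so a table
      // produced by 32-bit PEI is read identically by x64 DXE/SMM.
      //
      CPU_REGISTER_TABLE_ENTRY  *RegisterTableEntry;

      RegisterTableEntry =
        (CPU_REGISTER_TABLE_ENTRY *)(UINTN)RegisterTable->RegisterTableEntry;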
* UefiCpuPkg/PiSmmCpuDxeSmm: Refine casting result to bigger size (Hao Wu, 2017-03-08; 1 file changed, +2/-2)
  The commit is a follow-up of commit 8491e30. In file MpService.c line 786:

      Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask
      ...

  (Where PageTable is of type VOID*, Index is of type UINTN, mAddressEncMask is of type UINT64 and Pte[Index] is of type UINT64.) Since in this case the code logic ensures that the expression will not exceed the range of UINTN, the commit removes the explicit type cast '(UINT64)'.
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg: Refine casting expression result to bigger size (Hao Wu, 2017-03-06; 2 files changed, +5/-5)
  There are cases where all operands of an expression have rank less than UINT64/INT64 and the result of the expression is explicitly cast to UINT64/INT64 to fit the target size. An example:

      UINT32 a,b;
      // a and b can be any unsigned int type with rank less than UINT64, like
      // UINT8, UINT16, etc.
      UINT64 c;
      c = (UINT64) (a + b);

  Some static code checkers may warn that the expression result might overflow within the rank of "int" (integer promotions) and that the result is then cast to a bigger size. The commit refines the code by the following rules:

  1) When the expression could overflow the range of unsigned int/int:

      c = (UINT64)a + b;

  2) When the expression will not overflow within the rank of "int", remove the explicit type casts:

      c = a + b;

  3) When the expression will be cast to a pointer of possibly greater size:

      UINT32 a,b;
      VOID *c;
      c = (VOID *)(UINTN)(a + b);   -->   c = (VOID *)((UINTN)a + b);

  4) When one side of a comparison expression contains only operands with rank less than UINT32:

      UINT8 a;
      UINT16 b;
      UINTN c;
      if ((UINTN)(a + b) > c) {...}   -->   if (((UINT32)a + b) > c) {...}

  For rule 4), if we remove the 'UINTN' type cast like:

      if (a + b > c) {...}

  the VS compiler will complain with warning C4018 (signed/unsigned mismatch, level 3 warning) due to promoting 'a + b' to type 'int'.
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask (Leo Duran, 2017-03-01; 9 files changed, +91/-125)
  This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature. The mask is applied when page table entries are created or modified.
  Cc: Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>, Star Zeng <star.zeng@intel.com>, Laszlo Ersek <lersek@redhat.com>, Brijesh Singh <brijesh.singh@amd.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Leo Duran <leo.duran@amd.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
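  A sketch of how such a mask gets folded into a page table entry. IA32_PG_P/IA32_PG_RW stand in for the module's paging attribute bits, the helper and caching scheme are assumptions, and the exact masking done by the patch may differ.

      #include <Library/PcdLib.h>

      #define IA32_PG_P   BIT0   // present
      #define IA32_PG_RW  BIT1   // read/write

      UINT64  mAddressEncMask;   // cached once at driver entry:
                                 // mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask);

      VOID
      SetPageTableEntry (
        IN OUT UINT64                *PageEntry,
        IN     EFI_PHYSICAL_ADDRESS  PhysicalAddress
        )
      {
        //
        // OR the SEV encryption mask into every page table entry we create.
        //
        *PageEntry = PhysicalAddress | mAddressEncMask | IA32_PG_P | IA32_PG_RW;
      }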
* UefiCpuPkg/PiSmmCpuDxeSmm: Add check to avoid NULL ptr dereference (Hao Wu, 2016-12-20; 1 file changed, +8/-0)
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpu: Add SMM Comm Buffer Paging Protection. (Jiewen Yao, 2016-12-19; 5 files changed, +324/-12)
  This patch marks the normal OS buffer types EfiLoaderCode/Data, EfiBootServicesCode/Data, EfiConventionalMemory, and EfiACPIReclaimMemory as not present after SmmReadyToLock. Accessing those regions during the OS runtime phase is not a good practice. Previously, we did a similar check in SmmMemLib to help SMI handlers do the check, but if an SMI handler forgets the check, it can still access these OS regions and introduce risk. So here we enforce the policy to prevent that from happening.
  Cc: Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>, Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
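  A sketch of the classification step such a policy needs: deciding, per UEFI memory map descriptor, whether the range should be unmapped in the SMM page table after SmmReadyToLock. The helper name is illustrative.

      #include <PiSmm.h>

      //
      // Returns TRUE for memory types that SMI handlers must not touch at
      // OS runtime; those ranges get marked not-present after SmmReadyToLock.
      //
      BOOLEAN
      IsUefiPageNotPresent (
        IN EFI_MEMORY_DESCRIPTOR  *MemoryMap
        )
      {
        switch (MemoryMap->Type) {
          case EfiLoaderCode:
          case EfiLoaderData:
          case EfiBootServicesCode:
          case EfiBootServicesData:
          case EfiConventionalMemory:
          case EfiACPIReclaimMemory:
            return TRUE;
          default:
            return FALSE;
        }
      }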
* UefiCpuPkg/PiSmmCpuDxeSmm: Fix .S & .asm build failure (Feng Tian, 2016-12-16; 4 files changed, +4/-3)
  Cc: Michael D Kinney <michael.d.kinney@intel.com>, Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Feng Tian <feng.tian@intel.com>
  Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>, Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg: Make the comments align with the functions (Dandan Bi, 2016-12-14; 1 file changed, +1/-1)
  Cc: Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove MTRR field from PSD (Michael Kinney, 2016-12-06; 6 files changed, +6/-12)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  The MTRR field was removed from the PROCESSOR_SMM_DESCRIPTOR structure in commit https://github.com/tianocore/edk2/commit/26ab5ac3621bdefe96987f8c1512ca79e1bb7ac0, but the references to the MTRR field in the assembly files were not removed. Remove the extern reference to gSmiMtrr and set the Reserved14 field of PROCESSOR_SMM_DESCRIPTOR to 0.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Always initialize PSD (Michael Kinney, 2016-12-06; 1 file changed, +8/-8)
  The commit https://github.com/tianocore/edk2/commit/f12367a0b1de7838f1cb8e0839e168ed7b862333 moved the initialization of the default PROCESSOR_SMM_DESCRIPTOR from MpService.c to SmramSaveState.c and made this initialization conditional on the value returned by the SmmCpuFeaturesGetSmiHandlerSize() library function. That changed the behavior of the PiSmmCpuDxeSmm module. The initialization of the PROCESSOR_SMM_DESCRIPTOR is moved before the call to SmmCpuFeaturesGetSmiHandlerSize() to preserve the previous behavior.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpu: Fixed #double fault on #page fault. (Jiewen Yao, 2016-12-07; 4 files changed, +171/-49)
  This patch fixes https://bugzilla.tianocore.org/show_bug.cgi?id=246.
  Previously, when an SMM exception happened after EndOfDxe with StackGuard enabled on IA32, a #double fault exception was reported instead of a #page fault. Root cause: the current EDK II SMM page protection locks the GDT. If the IA32 stack guard is enabled, the page fault handler performs a task switch, and that task switch needs to write the busy flag in the GDT and write the TSS. However, the GDT and TSS are locked at that point, so the double fault happens. The fix is to not lock the GDT when IA32 StackGuard is enabled. This issue does not exist on X64, or on IA32 without StackGuard.
  Cc: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove PSD layout assumptions (Michael Kinney, 2016-12-01; 8 files changed, +60/-47)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  Remove the dependency on the layout of PROCESSOR_SMM_DESCRIPTOR everywhere possible. The only exception is the standard SMI entry handler template that is included with the PiSmmCpuDxeSmm module. This allows an instance of SmmCpuFeaturesLib to provide alternate PROCESSOR_SMM_DESCRIPTOR structure layouts.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove MTRRs from PSD structure (Michael Kinney, 2016-12-01; 2 files changed, +5/-15)
  https://bugzilla.tianocore.org/show_bug.cgi?id=277
  All CPUs use the same MTRR settings. Move the MTRR settings from a field in the PROCESSOR_SMM_DESCRIPTOR structure into a module global variable.
  Cc: Jiewen Yao <jiewen.yao@intel.com>, Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Clear some semaphores on S3 boot path (Jeff Fan, 2016-12-01; 1 file changed, +3/-0)
  Some semaphores are not cleared on the S3 boot path. For example, mSmmMpSyncData->CpuData[CpuIndex].Present may still keep the value set at SMM runtime during S3 resume, which can cause the BSP to make a wrong judgement about an SMM AP's present state. We had a related fix in e78a2a49ee6b0c0d7c6997c87ace31d7761cf636, but it was not complete. This fix clears the Busy/Run/Present semaphores in InitializeMpSyncData().
  Cc: Laszlo Ersek <lersek@redhat.com>, Feng Tian <feng.tian@intel.com>, Jiewen Yao <jiewen.yao@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Feng Tian <feng.tian@intel.com>
* UefiCpuPkg/PiSmmCpu: relax superpage protection on page split. (Jiewen Yao, 2016-11-30; 1 file changed, +2/-2)
  The PiSmmCpu driver may split pages to satisfy a page attribute request. The current logic propagates the superpage attribute not only to the leaf page attributes but also to the directory page attribute. The latter can be wrong, because we cannot clear protection without touching the directory page attribute. The effective protection is the strictest combination across the levels, so we should always clear protection on the directory page and set protection on the leaf page, which makes clearing it later easy.
  Cc: Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>, Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: handle dynamic PcdCpuMaxLogicalProcessorNumber (Laszlo Ersek, 2016-11-28; 2 files changed, +10/-10)
  "UefiCpuPkg/UefiCpuPkg.dec" already allows platforms to make PcdCpuMaxLogicalProcessorNumber dynamic, however PiSmmCpuDxeSmm does not take this into account everywhere. As soon as a platform turns the PCD into a dynamic one, at least S3 fails. When the PCD is dynamic, all PcdGet() calls translate into PCD DXE protocol calls, which are only permitted at boot time, not at runtime or during S3 resume.
  We already have a variable called mMaxNumberOfCpus; it is initialized in the entry point function like this:

      > //
      > // If support CPU hot plug, we need to allocate resources for possibly
      > // hot-added processors
      > //
      > if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
      >   mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
      > } else {
      >   mMaxNumberOfCpus = mNumberOfCpus;
      > }

  There's another use of the PCD a bit higher up, also in the entry point function:

      > //
      > // Use MP Services Protocol to retrieve the number of processors and
      > // number of enabled processors
      > //
      > Status = MpServices->GetNumberOfProcessors (MpServices, &mNumberOfCpus,
      >            &NumberOfEnabledProcessors);
      > ASSERT_EFI_ERROR (Status);
      > ASSERT (mNumberOfCpus <= PcdGet32 (PcdCpuMaxLogicalProcessorNumber));

  Preserve these calls in the entry point function, and replace all other uses of PcdCpuMaxLogicalProcessorNumber -- there are only reads -- with mMaxNumberOfCpus. For PcdCpuHotPlugSupport==TRUE, this is an unobservable change. For PcdCpuHotPlugSupport==FALSE, we even save SMRAM, because we no longer allocate resources needlessly for CPUs that can never appear in the system.
  PcdCpuMaxLogicalProcessorNumber is also retrieved in "UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c", but only in the library instance constructor, which runs even before the entry point function is called.
  Cc: Igor Mammedov <imammedo@redhat.com>, Jeff Fan <jeff.fan@intel.com>, Jordan Justen <jordan.l.justen@intel.com>, Michael Kinney <michael.d.kinney@intel.com>
  Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=116
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpu: Correct exception message. (Jiewen Yao, 2016-11-24; 3 files changed, +77/-7)
  This patch fixes the first part of https://bugzilla.tianocore.org/show_bug.cgi?id=242.
  Previously, when an SMM exception happened, "stack overflow" was misreported. This patch checks the #PF address to determine whether it is a real stack overflow or is caused by SMM page protection, and it dumps the exception data, the PF address, and the module that triggered the issue.
  Cc: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: dynamic PcdCpuSmmApSyncTimeout, PcdCpuSmmSyncMode (Laszlo Ersek, 2016-11-22; 1 file changed, +3/-1)
  Move the declaration of these PCDs from the [PcdsFixedAtBuild, PcdsPatchableInModule] section of "UefiCpuPkg/UefiCpuPkg.dec" to the [PcdsFixedAtBuild, PcdsPatchableInModule, PcdsDynamic, PcdsDynamicEx] section. Their types, default values, and token values remain unchanged.
  Only UefiCpuPkg/PiSmmCpuDxeSmm consumes these PCDs, specifically on the call stack of its entry point function, and it turns them into static or dynamically allocated data in SMRAM:

      PiCpuSmmEntry()              [PiSmmCpuDxeSmm.c]
        InitializeSmmTimer()       [SyncTimer.c]
          PcdCpuSmmApSyncTimeout -> mTimeoutTicker
        InitializeMpServiceData()  [MpService.c]
          InitializeMpSyncData()   [MpService.c]
            PcdCpuSmmSyncMode -> mSmmMpSyncData->EffectiveSyncMode

  However, there's another call path to fetching "PcdCpuSmmSyncMode", namely

      SmmInitHandler()             [PiSmmCpuDxeSmm.c]
        InitializeMpSyncData()     [MpService.c]
          PcdCpuSmmSyncMode -> mSmmMpSyncData->EffectiveSyncMode

  and this path is exercised during S3 resume (as stated by the comment in SmmInitHandler() too, "Initialize private data during S3 resume"). While we can call the PCD protocol (via PcdLib) for fetching dynamic PCDs in the entry point function, we cannot do that at S3 resume. Therefore pre-fetch PcdCpuSmmSyncMode into a new global variable (which lives in SMRAM) in InitializeMpServiceData(), just before calling InitializeMpSyncData(). This way InitializeMpSyncData() can retrieve the stashed PCD value from SMRAM, regardless of the boot mode.
  Cc: Jeff Fan <jeff.fan@intel.com>, Jordan Justen <jordan.l.justen@intel.com>, Michael Kinney <michael.d.kinney@intel.com>, Paolo Bonzini <pbonzini@redhat.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=230
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
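  A sketch of the stash pattern the message describes: read the dynamic PCD once on the boot-time path and let the S3-resume path consume the SMRAM-resident copy. The global's name, the simplified signatures, and the mSmmMpSyncData field access are assumptions.

      SMM_CPU_SYNC_MODE  mCpuSmmSyncMode;   // lives in SMRAM; name assumed

      VOID
      InitializeMpSyncData (
        VOID
        )
      {
        //
        // Works at boot and during S3 resume: no PcdGet() call here.
        //
        mSmmMpSyncData->EffectiveSyncMode = mCpuSmmSyncMode;
      }

      VOID
      InitializeMpServiceData (
        VOID                                // parameters trimmed for the sketch
        )
      {
        //
        // Boot-time path: PCD protocol calls are still allowed here.
        //
        mCpuSmmSyncMode = (SMM_CPU_SYNC_MODE)PcdGet8 (PcdCpuSmmSyncMode);
        InitializeMpSyncData ();
      }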
* UefiCpuPkg/PiSmmCpu: Check XdSupport before set NX. (Jiewen Yao, 2016-11-18; 1 file changed, +6/-4)
  When XD is not supported, BIT63 is reserved and we should not set it in the page table. Tested with OVMF IA32/IA32X64 with XD enabled and disabled.
  Analyzed-by: Laszlo Ersek <lersek@redhat.com>
  Cc: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
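  A sketch of how XD/NX support is typically detected before the NX bit (BIT63) is ever set in a page table entry; the mXdSupported flag name mirrors the module's SmmProfile code but is treated as an assumption here.

      #include <Library/BaseLib.h>

      #define CPUID_EXTENDED_FUNCTION  0x80000000
      #define CPUID_EXTENDED_CPU_SIG   0x80000001

      BOOLEAN  mXdSupported = FALSE;

      VOID
      CheckXdSupport (
        VOID
        )
      {
        UINT32  RegEax;
        UINT32  RegEdx;

        AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
        if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
          AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
          //
          // CPUID.80000001H:EDX.NX[bit 20] -- only when this is set may
          // BIT63 be used in page table entries.
          //
          mXdSupported = (BOOLEAN)((RegEdx & BIT20) != 0);
        }
      }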
* MdeModulePkg/PiSmmCpuDxeSmm: Check RegisterCpuInterruptHandler status (Jeff Fan, 2016-11-18; 3 files changed, +10/-3)
  Once a platform selects the incorrect instance, the caller can learn it from the return status and ASSERT().
  Cc: Feng Tian <feng.tian@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add volatile to mNumberToFinish (Michael Kinney, 2016-11-17; 1 file changed, +1/-1)
  Add volatile qualifier to mNumberToFinish to prevent GCC 5.4 compiler from optimizing away required logic in ACPI S3 resume.
  Cc: Liming Gao <liming.gao@intel.com>, Laszlo Ersek <lersek@redhat.com>, Andrew Fish <afish@apple.com>, Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>
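  A sketch of why volatile matters here: the BSP spins on a counter that APs decrement from other processors, so the compiler must re-read the memory location on every pass instead of caching it in a register. The wait-function name and usage are simplified.

      #include <Library/BaseLib.h>

      volatile UINT32  mNumberToFinish;

      VOID
      WaitForApsToReachSafeCode (
        VOID
        )
      {
        //
        // Without 'volatile', the compiler could hoist the load out of the
        // loop and spin forever on a stale register value.
        //
        while (mNumberToFinish > 0) {
          CpuPause ();
        }
      }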
* UefiCpuPkg/PiSmmCpuDxeSmm: TransferApToSafeState() use UINTN params (Michael Kinney, 2016-11-17; 4 files changed, +28/-28)
  Update TransferApToSafeState() to use UINTN parameters to reduce the number of type casts required in these calls. Also change the NumberToFinish parameter from UINT32* to UINTN NumberToFinishAddress to resolve issues with conversion from a volatile pointer to a non-volatile pointer. The assembly code that receives the NumberToFinishAddress value must treat that memory location as volatile to track the number of APs.
  Cc: Liming Gao <liming.gao@intel.com>, Laszlo Ersek <lersek@redhat.com>, Andrew Fish <afish@apple.com>, Jeff Fan <jeff.fan@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>, Jeff Fan <jeff.fan@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection. (Jiewen Yao, 2016-11-17; 25 files changed, +2042/-775)
  PiSmmCpuDxeSmm consumes SmmAttributesTable and sets up the page table as follows:
  1) The code region is marked read-only and the data region non-executable, if the PE image is 4K aligned.
  2) Important data structures are set to RO, such as the GDT/IDT.
  3) SmmSaveState is set to non-executable, and SmmEntrypoint is set to read-only.
  4) If static paging is supported, the page table itself is read-only.
  We use the page table to protect other components, and itself. If we use dynamic paging, we can still provide *partial* protection, and hope the page table is not modified by other components. The XD enabling code is moved to SmiEntry to let NX take effect.
  Cc: Jeff Fan <jeff.fan@intel.com>, Feng Tian <feng.tian@intel.com>, Star Zeng <star.zeng@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>, Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Jeff Fan <jeff.fan@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Free SmramRanges to save SMM space (Jeff Fan, 2016-11-16; 1 file changed, +1/-0)
  Cc: Zeng Star <star.zeng@intel.com>, Feng Tian <feng.tian@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Zeng Star <star.zeng@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Decrease mNumberToFinish in AP safe code (Jeff Fan, 2016-11-15; 4 files changed, +18/-15)
  We put APs into a hlt-loop in the safe code, but we decreased mNumberToFinish before the APs entered the safe code; Paolo pointed out this gap. This patch moves the mNumberToFinish decrement into the safe code, which makes sure the BSP waits until all APs are running in the safe code.
  https://bugzilla.tianocore.org/show_bug.cgi?id=216
  Reported-by: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Laszlo Ersek <lersek@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, Jiewen Yao <jiewen.yao@intel.com>, Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.0
  Signed-off-by: Jeff Fan <jeff.fan@intel.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>, Jiewen Yao <jiewen.yao@intel.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>