path: root/UefiCpuPkg
* UefiCpuPkg: Decouple the SEV-ES functionality. (Yuanhao Xie, 2023-07-27; 1 file changed, -1/+4)
  The purpose is to fix an issue where an exception occurs at the start of the
  DXE phase by applying the following patch series on Intel-based systems:
    UefiCpuPkg: Refactor the logic for placing APs in HltLoop.
    UefiCpuPkg: Refactor the logic for placing APs in Mwait/Runloop.
    UefiCpuPkg: Create MpHandOff.
    UefiCpuPkg: ApWakeupFunction directly use CpuMpData.
    UefiCpuPkg: Eliminate the second INIT-SIPI-SIPI sequence.
  This series of patches changes the way the APs are initialized and woken up.
  It removes the second INIT-SIPI-SIPI sequence and introduces a special
  startup signal to wake up APs. These patches also create a new HOB,
  identified by mMpHandOffGuid, which stores only the minimum information
  required from the PEI phase to the DXE phase. As a result, the original HOB
  (mCpuInitMpLibHobGuid) is now used only as a global variable in the PEI
  phase and is no longer necessary in the DXE phase on Intel-based systems.
  The AMD SEV-ES related code still relies on OldCpuMpData in the DXE phase.
  This patch decouples the SEV-ES functionality of assigning CpuMpData to
  OldCpuMpData->NewCpuMpData from the Intel logic.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
* UefiCpuPkg: Uses gMmst in MmSaveStateLib (Abdul Lateef Attar, 2023-07-17; 6 files changed, -10/+10)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Use gMmst instead of gSmst. Replace SmmServicesTableLib with
  MmServicesTableLib.
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Abner Chang <abner.chang@amd.com>
  Signed-off-by: Abdul Lateef Attar <AbdulLateef.Attar@amd.com>
* UefiCpuPkg: RISC-V: Support MMU with SV39/48/57 mode (Tuan Phan, 2023-07-15; 9 files changed, -2/+873)
  During CpuDxe initialization, the MMU will be set up with the highest mode
  that the hardware supports.
  Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
  Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
  Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
* UefiCpuPkg: Eliminate the second INIT-SIPI-SIPI sequence. (Yuanhao Xie, 2023-07-11; 2 files changed, -2/+145)
  When both the PEI and DXE phases operate in the same execution mode
  (32-bit/64-bit), the BSP sends a special start-up signal during the DXE
  phase to awaken the APs. To eliminate the need for the INIT-SIPI-SIPI
  sequence at the beginning of the DXE phase, the BSP calls the
  SwitchApContext function to trigger the special start-up signal. By writing
  the specified StartupSignalValue to the designated StartupSignalAddress, the
  BSP wakes up the APs from MWAIT mode. Once the APs receive the
  MP_HAND_OFF_SIGNAL value, they are awakened and proceed to execute the
  SwitchContextPerAp procedure. They enter another while loop, transitioning
  their context from the PEI phase to the DXE phase.
  The original state transitions for an AP during the procedure are as
  follows:
    Idle ----> Ready ----> Busy ----> Idle
         [BSP]       [AP]       [AP]
  Instead of the INIT-SIPI-SIPI sequence, we make use of a start-up signal to
  awaken the APs and transfer their context from PEI to DXE. Consequently, the
  APs, rather than the BSP, set their state to CpuStateReady.
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Tom Lendacky <thomas.lendacky@amd.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
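  A minimal sketch of the wakeup path described above, in C. Only the
  StartupSignalAddress/StartupSignalValue names come from the commit text; the
  structure layout and function below are illustrative assumptions, not the
  actual MpInitLib code:

    #include <Base.h>

    typedef struct {
      UINT64    StartupSignalAddress;    // address the AP monitors while in MWAIT
      UINT64    StartupSignalValue;      // value that releases the AP
    } AP_STARTUP_SIGNAL_INFO;            // hypothetical layout, for illustration only

    //
    // BSP side: instead of issuing INIT-SIPI-SIPI, store the agreed value to the
    // monitored address; an AP sitting in MWAIT observes the write and wakes up.
    //
    VOID
    WakeApWithStartupSignal (
      IN CONST AP_STARTUP_SIGNAL_INFO  *Info
      )
    {
      volatile UINT32  *Signal;

      Signal  = (volatile UINT32 *)(UINTN)Info->StartupSignalAddress;
      *Signal = (UINT32)Info->StartupSignalValue;  // AP exits MWAIT and checks the value
    }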
* UefiCpuPkg: ApWakeupFunction directly use CpuMpData. (Yuanhao Xie, 2023-07-11; 3 files changed, -13/+6)
  In the original design, once the APs finished executing their assembly code
  and switched to executing C code, they would enter a continuous loop within
  a function. In this function, they would collect CpuMpData using the
  MP_CPU_EXCHANGE_INFO mechanism. However, in the updated approach, CpuMpData
  can now be passed directly to the ApWakeUpFunction, bypassing the need for
  MP_CPU_EXCHANGE_INFO. This modification is made in preparation for
  eliminating the requirement of a second INIT-SIPI-SIPI sequence in the DXE
  phase.
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Tom Lendacky <thomas.lendacky@amd.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
* UefiCpuPkg: Create MpHandOff. (Yuanhao Xie, 2023-07-11; 7 files changed, -15/+186)
  Initially, the HOB served as a way to transfer information from PEI to DXE.
  However, during the DXE phase, only a few fields of the CPU_MP_DATA
  collected in the PEI phase were needed. A new HOB was therefore created
  specifically to transfer information to the DXE phase; it contains only the
  essential fields required for reuse in DXE. For instance, instead of
  directly including the BspNumber in MpHandOff, the DXE phase introduced the
  use of GetBspNumber() to collect the BspNumber from the ApicID and CpuCount.
  The SaveCpuMpData() function was updated to construct the MP_HAND_OFF HOB.
  Additionally, the function introduced the MP_HAND_OFF_SIGNAL, which solely
  serves the purpose of awakening the APs and transitioning their context from
  PEI to DXE. The WaitLoopExecutionMode field indicates whether the bit mode
  of PEI matches that of DXE. Both of these are filled only if the ApLoopMode
  is not ApInHltLoop. In the case of ApInHltLoop, it remains necessary to wake
  up the APs using the INIT-SIPI-SIPI sequence. This improvement still allows
  INIT-SIPI-SIPI even when APs wait in Run/Mwait loop mode.
  The function GetMpHandOffHob() was added to facilitate access to the
  collected MpHandOff in the DXE phase. The CpuMpData in the DXE phase is
  updated by gathering information from MpHandOff, since MpHandOff replaces
  the usage of OldCpuMpData and contains the essential information carried
  from the PEI phase to the DXE phase. AmdSevUpdateCpuMpData was included to
  maintain the original implementation of AmdSev, ensuring that
  OldCpuMpData->NewCpuMpData points to CpuMpData.
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Tom Lendacky <thomas.lendacky@amd.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
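  For illustration, a hedged sketch of how DXE code might locate such a
  hand-off GUID HOB. The HobLib calls are standard; the GUID value and the
  field subset are placeholders, not the real MP_HAND_OFF definition:

    #include <PiDxe.h>
    #include <Library/HobLib.h>

    STATIC EFI_GUID  mMpHandOffGuidSketch = {
      0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 }  // placeholder GUID value
    };

    typedef struct {
      UINT32    WaitLoopExecutionMode;   // illustrative subset of fields
      UINT32    CpuCount;
    } MP_HAND_OFF_SKETCH;

    MP_HAND_OFF_SKETCH *
    GetMpHandOffSketch (
      VOID
      )
    {
      EFI_HOB_GUID_TYPE  *GuidHob;

      GuidHob = GetFirstGuidHob (&mMpHandOffGuidSketch);
      if (GuidHob == NULL) {
        return NULL;                     // PEI did not publish the hand-off HOB
      }

      return (MP_HAND_OFF_SKETCH *)GET_GUID_HOB_DATA (GuidHob);
    }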
* UefiCpuPkg: Refactor the logic for placing APs in Mwait/Runloop. (Yuanhao Xie, 2023-07-11; 1 file changed, -33/+50)
  Refactor the logic for placing APs in Mwait/Runloop into a separate
  function.
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Tom Lendacky <thomas.lendacky@amd.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
* UefiCpuPkg: Refactor the logic for placing APs in HltLoop. (Yuanhao Xie, 2023-07-11; 1 file changed, -11/+24)
  Refactor the logic for placing APs in HltLoop into a separate function.
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Tom Lendacky <thomas.lendacky@amd.com>
  Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
* UefiCpuPkg: Get processor extended information for SmmCpuServiceProtocol (Hongbin1 Zhang, 2023-07-05; 1 file changed, -1/+1)
  Some features, such as RAS, need to use processor extended information under
  SMM, so add code to support it.
  Signed-off-by: Hongbin1 Zhang <hongbin1.zhang@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
* UefiCpuPkg: Removes SmmCpuFeaturesReadSaveStateRegister (Abdul Lateef Attar, 2023-07-03; 7 files changed, -728/+9)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Removes the SmmCpuFeaturesReadSaveStateRegister and
  SmmCpuFeaturesWriteSaveStateRegister functions from the SmmCpuFeaturesLib
  library. The MmSaveStateLib library replaces the functionality of the above
  functions. Platforms, old and new, need to use the MmSaveStateLib library to
  read/write save state registers. The current implementation supports Intel
  and AMD.
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Jordan Justen <jordan.l.justen@intel.com>
  Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg: Implements MmSaveStateLib for Intel (Abdul Lateef Attar, 2023-07-03; 3 files changed, -1/+447)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Implements MmSaveStateLib library interfaces to read and write save state
  registers for the Intel processor family. Moves Intel and AMD common
  functionality to a common area.
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
* UefiCpuPkg: Implements SmmCpuFeaturesLib for AMD Family (Abdul Lateef Attar, 2023-07-03; 3 files changed, -0/+490)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Implements interfaces to read and write save state registers of AMD's
  processor family. Initializes the processor SMMADDR and MASK depending on
  the PcdSmrrEnable flag. Programs or corrects the IP once control returns
  from SMM.
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
* UefiCpuPkg/SmmCpuFeaturesLib: Restructure arch-dependent code (Abdul Lateef Attar, 2023-07-03; 2 files changed, -128/+128)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Moves Intel-specific code to the arch-dependent file. Other processor
  families might have different implementations of these functions; hence,
  they are moved out of the common file.
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Abdul Lateef Attar <AbdulLateef.Attar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg: Implements MmSaveStateLib library instance (Abdul Lateef Attar, 2023-07-03; 5 files changed, -0/+572)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Implements the MmSaveStateLib library class for the AMD CPU family.
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
* UefiCpuPkg: Adds MmSaveStateLib library class (Abdul Lateef Attar, 2023-07-03; 2 files changed, -0/+78)
  BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4182
  Adds the MmSaveStateLib library class in UefiCpuPkg.dec. Adds the function
  declaration header file.
  Cc: Paul Grimes <paul.grimes@amd.com>
  Cc: Abner Chang <abner.chang@amd.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Signed-off-by: Abdul Lateef Attar <abdattar@amd.com>
  Reviewed-by: Abner Chang <abner.chang@amd.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg: CpuTimerDxeRiscV64: Fix timer event not working correctly (Tuan Phan, 2023-07-02; 1 file changed, -1/+7)
  The timer notify function should be called with the timer period, not the
  value read from the timer register.
  Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function (Dun Tan, 2023-06-30; 3 files changed, -40/+6)
  Remove the unnecessary function SetNotPresentPage(). We can directly use
  ConvertMemoryPageAttributes to set a range to non-present.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Refinement to smm runtime InitPaging() code (Dun Tan, 2023-06-30; 2 files changed, -228/+100)
  This commit is a code refinement to the current SMM runtime InitPaging()
  page table update code. In InitPaging(), if PcdCpuSmmProfileEnable is TRUE,
  use the ConvertMemoryPageAttributes() API to map the ranges in
  mProtectionMemRange to the attribute recorded in the attribute field of
  mProtectionMemRange, and map the ranges outside mProtectionMemRange as
  non-present. If PcdCpuSmmProfileEnable is FALSE, only the ranges not in
  mSmmCpuSmramRanges need to be set as NX.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock (Dun Tan, 2023-06-30; 1 file changed, -0/+32)
  Sort mProtectionMemRange in InitProtectedMemRange() when ReadyToLock.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo (Dun Tan, 2023-06-30; 1 file changed, -0/+32)
  Sort mSmmCpuSmramRanges after getting the SMRAM info in the FindSmramInfo()
  function.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
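  A hedged sketch of the kind of ordering this implies, using a simple
  insertion sort over EFI_SMRAM_DESCRIPTOR entries keyed by CpuStart; the
  actual commit may sort differently:

    #include <PiSmm.h>

    VOID
    SortSmramRanges (
      IN OUT EFI_SMRAM_DESCRIPTOR  *Ranges,
      IN     UINTN                 Count
      )
    {
      UINTN                 Index;
      UINTN                 Pos;
      EFI_SMRAM_DESCRIPTOR  Temp;

      for (Index = 1; Index < Count; Index++) {
        Temp = Ranges[Index];
        // Shift entries with a larger CpuStart right, then drop Temp into place.
        for (Pos = Index; Pos > 0 && Ranges[Pos - 1].CpuStart > Temp.CpuStart; Pos--) {
          Ranges[Pos] = Ranges[Pos - 1];
        }

        Ranges[Pos] = Temp;
      }
    }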
* UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table (Dun Tan, 2023-06-30; 3 files changed, -147/+5)
  Use GenSmmPageTable() to create both the IA32 and X64 SMM S3 page tables.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Add GenSmmPageTable() to create smm page table (Dun Tan, 2023-06-30; 4 files changed, -195/+107)
  This commit is a code refinement to the current SMM page table generation
  code. Add a new GenSmmPageTable() API to create the SMM page table based on
  the PageTableMap() API in CpuPageTableLib. The caller only needs to specify
  the paging mode and the PhysicalAddressBits to map. This function can be
  used to create both IA32 PAE paging and X64 5-level/4-level paging.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h (Dun Tan, 2023-06-30; 5 files changed, -8/+3)
  Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h and remove the extern
  declarations for mSmmShadowStackSize in C files to simplify the code.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table (Dun Tan, 2023-06-30; 2 files changed, -0/+16)
  Clear CR0.WP before modifying the SMM page table. Currently, there is an
  assumption that the SMM page table is always RW before ReadyToLock. However,
  when AMD SEV is enabled, the FvbServicesSmm driver calls
  MemEncryptSevClearMmioPageEncMask to clear the AddressEncMask bit in the SMM
  page table for the range
  [PcdOvmfFdBaseAddress, PcdOvmfFdBaseAddress + PcdOvmfFirmwareFdSize].
  If a page split happens in this process, new memory for the SMM page table
  is allocated. The newly allocated page table memory is then marked as RO in
  the SMM page table in this FvbServicesSmm driver, which may lead to a page
  fault if SMM code doesn't clear CR0.WP before modifying the SMM page table
  when ReadyToLock.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP (Dun Tan, 2023-06-30; 2 files changed, -49/+90)
  Add two functions to disable/enable CR0.WP. These two functions will also be
  used in later commits. This commit doesn't change any functionality.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
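  A minimal sketch of such a pair, assuming the usual BaseLib CR0 accessors;
  the function names are illustrative, and the real helpers may also report
  whether WP was set so it is only restored when needed:

    #include <Base.h>
    #include <Library/BaseLib.h>

    //
    // CR0.WP is bit 16. Clearing it lets ring-0 code write read-only pages,
    // which is required while editing a page table that is itself mapped RO.
    //
    BOOLEAN
    ClearCr0WriteProtect (
      VOID
      )
    {
      UINTN  Cr0;

      Cr0 = AsmReadCr0 ();
      if ((Cr0 & BIT16) == 0) {
        return FALSE;                    // WP already clear; nothing to restore later
      }

      AsmWriteCr0 (Cr0 & ~(UINTN)BIT16);
      return TRUE;
    }

    VOID
    SetCr0WriteProtect (
      VOID
      )
    {
      AsmWriteCr0 (AsmReadCr0 () | BIT16);
    }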
* UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX (Dun Tan, 2023-06-30; 1 file changed, -22/+107)
  In the PiSmmCpuDxeSmm code, SetMemMapAttributes() marks memory ranges in
  SmmMemoryAttributesTable as RO/NX. There may be non-present ranges within
  these memory ranges. Setting other attributes for a non-present range is not
  permitted in CpuPageTableMapLib, so add code to handle this case. Only map
  the present ranges in SmmMemoryAttributesTable to RO or NX.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Add DEBUG_CODE for special case when clear RP (Dun Tan, 2023-06-30; 1 file changed, -0/+48)
  In the ConvertMemoryPageAttributes() function, when clearing RP for a
  specific range [BaseAddress, BaseAddress + Length], it means setting the
  present bit to 1 and assigning default values to the other attributes in the
  page table. The default attributes for the input range are NX disabled and
  ReadOnly. If there is an existing present range in
  [BaseAddress, BaseAddress + Length] and its attributes are not NX disabled
  or not ReadOnly, then output a DEBUG message to indicate that the NX and
  ReadOnly attributes of the existing present range are modified in the
  function.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
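  A hedged sketch of the DEBUG_CODE pattern described above; the parameters
  and the condition are illustrative, not the actual check inside
  ConvertMemoryPageAttributes():

    #include <Base.h>
    #include <Library/DebugLib.h>

    VOID
    WarnIfPresentRangeAttributesChange (
      IN UINT64   BaseAddress,
      IN UINT64   Length,
      IN BOOLEAN  RangeIsPresent,
      IN BOOLEAN  RangeIsNx,
      IN BOOLEAN  RangeIsReadOnly
      )
    {
      DEBUG_CODE_BEGIN ();
      if (RangeIsPresent && (RangeIsNx || !RangeIsReadOnly)) {
        DEBUG ((
          DEBUG_WARN,
          "Clearing RP overwrites NX/ReadOnly of present range [0x%lx, 0x%lx]\n",
          BaseAddress,
          BaseAddress + Length
          ));
      }

      DEBUG_CODE_END ();
    }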
* UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute. (Dun Tan, 2023-06-30; 5 files changed, -325/+121)
  Simplify the ConvertMemoryPageAttributes API to convert paging attributes
  via CpuPageTableLib. The new API calls PageTableMap() to update the page
  attributes of a memory range. With the PageTableMap() API in
  CpuPageTableLib, we can remove the complicated page table manipulation code.
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg/ResetVector: Remove pre-built binaries (Ray Ni, 2023-06-27; 16 files changed, -199/+9)
  It's simpler for a platform to include the ResetVector source, and having
  pre-built binaries adds the burden of keeping them updated. This patch
  removes the pre-built binaries and the script that builds them.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg/ResetVector: Add guidance of FDF ffs rule (Ray Ni, 2023-06-27; 2 files changed, -21/+25)
  The ResetVector assembly implementation puts "ALIGN 16" at the end to
  guarantee that the final executable file size is a multiple of 16 bytes. The
  module uses a special GUID which guarantees it is put at the very end of a
  FV, which should also be the end of the FD. All of these (file size is a
  multiple of 16 bytes, the module is put at the end of the FV, the FV is put
  at the end of the FD) guarantee the "JMP xxx" instruction is at FFFF_FFF0h.
  This patch updates the INF file and ReadMe.txt to add guidance on the FDF
  ffs rule for the ResetVector.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg: Include ResetVector in DSC (Ray Ni, 2023-06-27; 1 file changed, -2/+2)
  Since the ResetVector source module shares the same GUID as the binary
  module, the binary INF file is just removed from the DSC.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
* UefiCpuPkg/SmmCpu: Add PcdSmmApPerfLogEnable control AP perf-logging (Ray Ni, 2023-06-21; 5 files changed, -3/+21)
  When a platform has lots of CPU cores/threads, perf-logging on every AP
  produces lots of records. Multiplied by the number of SMIs during POST, the
  records become even more numerous. So, this patch adds a new PCD,
  PcdSmmApPerfLogEnable (default TRUE), to allow a platform to turn off
  perf-logging on APs.
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
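  A hedged sketch of the guard this adds around AP perf records; PcdGetBool
  and PERF_START_EX are standard edk2 macros, but the helper name, the token
  string, and the PCD wiring in the module are assumptions:

    #include <PiSmm.h>
    #include <Library/PcdLib.h>
    #include <Library/PerformanceLib.h>

    VOID
    ApProcedurePerfBegin (
      VOID
      )
    {
      if (!PcdGetBool (PcdSmmApPerfLogEnable)) {
        return;                          // platform opted out of AP perf-logging
      }

      // Token string is illustrative; one record per AP procedure invocation.
      PERF_START_EX (NULL, "ApProcedure", NULL, 0, 0);
    }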
* UefiCpuPkg/CpuSmm: Add perf-logging for MP procedures (Ray Ni, 2023-06-21; 6 files changed, -0/+219)
  MP procedures are those procedures that run on every CPU thread. The EDKII
  perf infrastructure is not MP-safe, so it does not support being called from
  those MP procedures. The patch adds SMM MP perf-logging support in
  SmmMpPerf.c. The following procedures are perf-logged:
    * SmmInitHandler
    * SmmCpuFeaturesRendezvousEntry
    * PlatformValidSmi
    * SmmCpuFeaturesRendezvousExit
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/CpuSmm: Add perf-logging for time-consuming BSP procedures (Ray Ni, 2023-06-21; 6 files changed, -5/+49)
  The patch adds perf-logging for the following potentially time-consuming BSP
  procedures:
    * PiCpuSmmEntry
      - SmmRelocateBases
      - ExecuteFirstSmiInit
    * BSPHandler
      - SmmWaitForApArrival
      - PerformRemainingTasks
        * InitPaging
        * SetMemMapAttributes
        * SetUefiMemMapAttributes
        * SetPageTableAttributes
        * ConfigSmmCodeAccessCheck
        * SmmCpuFeaturesCompleteSmmReadyToLock
  Signed-off-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg: RISC-V: TimerLib: Fix delay function to use 64-bit (Tuan Phan, 2023-06-15; 1 file changed, -30/+23)
  The timer compare register is 64-bit, so the delay function is simplified.
  Cc: Andrei Warkentin <andrei.warkentin@intel.com>
  Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
  Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
* UefiCpuPkg: CpuTimerDxeRiscV64: Fix incorrect value sent to SbiSetTimer (Tuan Phan, 2023-06-15; 3 files changed, -5/+26)
  SbiSetTimer expects a core tick value.
  Cc: Andrei Warkentin <andrei.warkentin@intel.com>
  Signed-off-by: Tuan Phan <tphan@ventanamicro.com>
  Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: Add Ap Rendezvous check in PerformRemainingTasks. (Zhihao Li, 2023-05-31; 1 file changed, -0/+13)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4424
  In Relaxed-AP Sync Mode, the BSP does not wait for all APs to arrive.
  However, PerformRemainingTasks() needs to wait for all APs to arrive before
  calling SetMemMapAttributes() and ConfigSmmCodeAccessCheck() when
  mSmmReadyToLock is true. In SetMemMapAttributes(),
  SmmSetMemoryAttributesEx() calls FlushTlbForAll(), which needs to start up
  the APs, so all APs must have arrived. The same applies to
  ConfigSmmCodeAccessCheck(), which also starts up the APs.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Signed-off-by: Zhihao Li <zhihao.li@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/CpuService.c: Check cpu sync mode in SmmCpuRendezvous() (Zhihao Li, 2023-05-31; 1 file changed, -6/+13)
  REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4431
  In AP relaxed mode, some SMI handlers should call SmmWaitForApArrival() to
  let all APs arrive in SmmCpuRendezvous(). But in traditional mode, these SMI
  handlers don't need to call SmmWaitForApArrival() again. So the CPU sync
  mode needs to be checked before calling SmmWaitForApArrival().
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Signed-off-by: Zhihao Li <zhihao.li@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/CpuMpPei: Conditionally enable PAE paging in 32bit mode (Jiaxin Wu, 2023-05-31; 3 files changed, -129/+75)
  Some security features depend on page table enabling. So, this patch enables
  paging if it is not already enabled (32-bit mode).
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Zeng Star <star.zeng@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
* UefiCpuPkg/SecCore: Migrate page table to permanent memory (Jiaxin Wu, 2023-05-31; 4 files changed, -0/+153)
  Background: For arch X64, the system enables the page table in SPI to cover
  the 0-512G range via the CR4.PAE, MSR.LME, CR0.PG, and CR3 settings (see the
  ResetVector code). The existing code doesn't cover access to higher
  addresses above 512G before the memory-discovered callback. That is a
  potential problem if the system accesses the higher addresses after the
  transition from temporary RAM to permanent MEM RAM.
  Solution: This patch migrates the page table to permanent memory to map the
  entire physical address space if CR0.PG is set during temporary RAM Done.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Zeng Star <star.zeng@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
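  A minimal sketch of the trigger condition, assuming the standard BaseLib CR0
  accessor; the helper name is illustrative:

    #include <Base.h>
    #include <Library/BaseLib.h>

    //
    // CR0.PG (bit 31) set at TemporaryRamDone means SEC/PEI is still running on
    // the flash-resident page table, so a copy must be rebuilt in permanent
    // memory before higher addresses are touched.
    //
    BOOLEAN
    IsPageTableMigrationNeeded (
      VOID
      )
    {
      return ((AsmReadCr0 () & BIT31) != 0);
    }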
* UefiCpuPkg/ResetVector: Support 5 level page table in ResetVector (Zhiguang Liu, 2023-05-30; 2 files changed, -0/+29)
  Add a macro, USE_5_LEVEL_PAGE_TABLE, to determine whether to create a
  5-level page table. If the macro USE_5_LEVEL_PAGE_TABLE is defined, the
  PML5Table is created at (4G-12K), while the PML4Table is at (4G-16K). In the
  runtime check, if 5-level paging is supported, the PML5Table is used;
  otherwise, the PML4Table is used. If the macro USE_5_LEVEL_PAGE_TABLE is not
  defined, to save space, 5-level paging is not created, and 4-level paging is
  at (4G-12K) and is used.
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Debkumar De <debkumar.de@intel.com>
  Cc: Catharine West <catharine.west@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg/ResetVector: Modify Page Table in ResetVector (Zhiguang Liu, 2023-05-30; 1 file changed, -16/+17)
  In the ResetVector, if a page table is created, its highest address is fixed
  because the code layout after the page table is fixed (4K for normal code,
  and another 4K that only contains the reset vector code).
  Today's implementation organizes the page table as follows if a 1G page
  table is used:
    4G-16K: PML4 page (PML4[0] points to 4G-12K)
    4G-12K: PDP page
    CR3 is set to 4G-16K
  When a 2M page table is used, the layout is as follows:
    4G-32K: PML4 page (PML4[0] points to 4G-28K)
    4G-28K: PDP page (PDP entries point to PD pages)
    4G-24K: PD page mapping 0-1G
    4G-20K: PD page mapping 1-2G
    4G-16K: PD page mapping 2-3G
    4G-12K: PD page mapping 3-4G
    CR3 is set to 4G-32K
  CR3 doesn't point to a fixed location, which is a bit hard to debug at
  runtime. The new page table layout always puts the PML4 at the highest
  address. When a 1G page table is used, the layout is as follows:
    4G-16K: PDP page
    4G-12K: PML4 page (PML4[0] points to 4G-16K)
  When a 2M page table is used, the layout is as follows:
    4G-32K: PD page mapping 0-1G
    4G-28K: PD page mapping 1-2G
    4G-24K: PD page mapping 2-3G
    4G-20K: PD page mapping 3-4G
    4G-16K: PDP page (PDP entries point to PD pages)
    4G-12K: PML4 page (PML4[0] points to 4G-16K)
  CR3 is always set to 4G-12K. So, this patch improves debuggability by making
  sure the initial CR3 points to a fixed address (4G-12K).
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Debkumar De <debkumar.de@intel.com>
  Cc: Catharine West <catharine.west@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg/ResetVector: Combine PageTables1G.asm and PageTables2M.asm (Zhiguang Liu, 2023-05-30; 3 files changed, -76/+33)
  Combine PageTables1G.asm and PageTables2M.asm to reuse code.
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Debkumar De <debkumar.de@intel.com>
  Cc: Catharine West <catharine.west@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg/ResetVector: Simplify page table creation in ResetVector (Zhiguang Liu, 2023-05-30; 3 files changed, -32/+24)
  Currently, page table creation has many hard-coded values for the offset to
  the start of the page table. To simplify it, add labels such as Pml4, Pdp
  and Pd, so that we can remove many hard-coded values.
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Debkumar De <debkumar.de@intel.com>
  Cc: Catharine West <catharine.west@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg/ResetVector: Rename macros about page table. (Zhiguang Liu, 2023-05-30; 2 files changed, -22/+39)
  This patch only renames macros, with no code logic impacted. There are two
  purposes for renaming the macros:
  1. Align some macro names in PageTables1G.asm and PageTables2M.asm, so that
     these two files can be easily combined later.
  2. Some macro names such as PDP are not accurate, since 4-level page entries
     also use this macro. PAGE_NLE (no leaf entry) is better.
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Tested-by: Gerd Hoffmann <kraxel@redhat.com>
  Acked-by: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Debkumar De <debkumar.de@intel.com>
  Cc: Catharine West <catharine.west@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg: Update PT code to support enable collect performance (Dun Tan, 2023-04-26; 3 files changed, -13/+44)
  Update the ProcTrace feature code to support collecting performance data by
  generating CYC and TSC packets. Add a new dynamic PCD to indicate whether to
  enable performance collection. In the ProcTrace.c code, if this new PCD is
  TRUE, after the CPUID check, CYC and TSC packets will be generated by
  setting the corresponding MSR bit fields if supported.
  Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4423
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Xiao X Chen <xiao.x.chen@intel.com>
* UefiCpuPkg: Update code to support enable ProcTrace only on BSP (Dun Tan, 2023-04-26; 3 files changed, -65/+119)
  Update the code to support enabling ProcTrace only on the BSP. Add a new
  dynamic PCD to indicate whether to enable ProcTrace only on the BSP. In the
  ProcTrace.c code, if this new PCD is TRUE, only allocate the buffer and set
  CtrlReg.Bits.TraceEn to 1 for the BSP.
  Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4423
  Signed-off-by: Dun Tan <dun.tan@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Cc: Gerd Hoffmann <kraxel@redhat.com>
  Cc: Xiao X Chen <xiao.x.chen@intel.com>
* UefiCpuLib: Remove UefiCpuLib. (Yu Pu, 2023-04-12; 6 files changed, -92/+0)
  Because UefiCpuPkg/UefiCpuLib has been merged into MdePkg/CpuLib and all
  modules have been updated to not depend on this library, remove it
  completely.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ray Ni <ray.ni@intel.com>
  Cc: Rahul Kumar <rahul1.kumar@intel.com>
  Signed-off-by: Yu Pu <yu.pu@intel.com>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
* UefiCpuPkg: Update code to be more C11 compliant by using __func__ (Rebecca Cran, 2023-04-10; 13 files changed, -32/+32)
  __FUNCTION__ is a pre-standard extension that gcc and Visual C++, among
  others, support, while __func__ was standardized in C99. Since it's more
  standard, replace __FUNCTION__ with __func__ throughout UefiCpuPkg.
  Signed-off-by: Rebecca Cran <rebecca@bsdio.com>
  Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ray Ni <ray.ni@intel.com>
  Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
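  A minimal illustration of the replacement; edk2's DEBUG macro prints ASCII
  strings with the %a specifier, so the call works identically with __func__
  (the function name below is hypothetical):

    #include <Base.h>
    #include <Library/DebugLib.h>

    VOID
    ReportEntry (
      VOID
      )
    {
      // __func__ is standard C99; __FUNCTION__ is a compiler extension.
      DEBUG ((DEBUG_INFO, "%a: entry\n", __func__));
    }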
* UefiCpuPkg/CpuExceptionHandlerLib: Drop special XCODE5 version (Ard Biesheuvel, 2023-04-06; 3 files changed, -92/+0)
  This library is no longer used or needed, so let's remove it.
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ray Ni <ray.ni@intel.com>