path: root/UefiCpuPkg
Commit message (Author, Date; files changed, lines -removed/+added)
* UefiCpuPkg/PiSmmCpuDxeSmm: Avoid possible NULL ptr dereference (Hao Wu, 2018-07-31; 1 file changed, -1/+1)
  Within function GetUefiMemoryAttributesTable(), add a check to avoid possible null pointer dereference.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
* UefiCpuPkg/PiSmmCpu: Check EFI_RUNTIME_RO in UEFI mem attrib table. (Jiewen Yao, 2018-07-26; 3 files changed, -3/+75)
  Treat UEFI runtime pages that carry the EFI_MEMORY_RO attribute as an invalid SMM communication buffer.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
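  A minimal sketch of the kind of check this patch describes (illustrative only, not the actual PiSmmCpuDxeSmm code; the helper name and the cached-table variable are assumptions):

    #include <Uefi.h>
    #include <Guid/MemoryAttributesTable.h>
    #include <Library/BaseLib.h>

    STATIC EFI_MEMORY_ATTRIBUTES_TABLE  *mUefiMemAttrTable;   // assumed to be cached earlier

    BOOLEAN
    IsCommBufferOutsideRoRuntimeRanges (                      // hypothetical helper name
      IN EFI_PHYSICAL_ADDRESS  Buffer,
      IN UINT64                Length
      )
    {
      EFI_MEMORY_DESCRIPTOR  *Entry;
      UINT64                 Start;
      UINT64                 End;
      UINTN                  Index;

      if (mUefiMemAttrTable == NULL) {
        return TRUE;
      }
      Entry = (EFI_MEMORY_DESCRIPTOR *)(mUefiMemAttrTable + 1);
      for (Index = 0; Index < mUefiMemAttrTable->NumberOfEntries; Index++) {
        if ((Entry->Attribute & EFI_MEMORY_RO) != 0) {
          Start = Entry->PhysicalStart;
          End   = Start + LShiftU64 (Entry->NumberOfPages, EFI_PAGE_SHIFT);
          if ((Buffer < End) && (Buffer + Length > Start)) {
            return FALSE;   // overlaps a read-only runtime range: invalid comm buffer
          }
        }
        Entry = NEXT_MEMORY_DESCRIPTOR (Entry, mUefiMemAttrTable->DescriptorSize);
      }
      return TRUE;
    }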
* UefiCpuPkg/PiSmmCpu: Check for untested memory in GCD (Jiewen Yao, 2018-07-26; 1 file changed, -24/+120)
  Treat GCD untested memory as an invalid SMM communication buffer.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
* UefiCpuPkg/MpInitLib: Not use disabled AP when call StartAllAPs. (Eric Dong, 2018-07-26; 3 files changed, -12/+26)
  Based on the UEFI spec requirement, the StartupAllAPs() function should not use APs that have been disabled. This patch changes the current code to follow this rule.
  V3 changes: WakeUpAP() skips disabled APs only when called from StartupAllAPs(); the other cases still need to include the disabled APs, such as CpuDxe driver startup and the ChangeApLoopCallback function.
  WakeUpAP() is called with (Broadcast && WakeUpDisabledAps) from MpInitLibInitialize(), CollectProcessorCount() and MpInitChangeApLoopCallback() only. The first two run before the PPI or Protocol user has a chance to disable any APs. The last one runs in response to the ExitBootServices and LegacyBoot events, after which the MP protocol is unusable. For this reason, it doesn't matter that an originally disabled AP's state is not restored to Disabled when WakeUpAP() is called with (Broadcast && WakeUpDisabledAps).
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
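  A simplified sketch of the rule the patch enforces (it relies on MpInitLib's internal CPU_MP_DATA and CpuStateDisabled definitions and is not the library's literal code):

    //
    // When building the worker list for StartupAllAPs(), skip the BSP and any
    // AP that the caller disabled via EnableDisableAP(); per the UEFI spec,
    // disabled APs must not execute the procedure.
    //
    for (Index = 0; Index < CpuMpData->CpuCount; Index++) {
      if (Index == CpuMpData->BspNumber) {
        continue;
      }
      if (GetApState (&CpuMpData->CpuData[Index]) == CpuStateDisabled) {
        continue;
      }
      SetApState (&CpuMpData->CpuData[Index], CpuStateReady);
    }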
* UefiCpuPkg/MpInitLib: Remove StartCount and volatile definition. (Eric Dong, 2018-07-26; 2 files changed, -8/+6)
  The patch includes the changes below:
  (1) It removes "volatile" from RunningCount, because only the BSP modifies it.
  (2) When we detect a timeout in CheckAllAPs(), and collect the list of failed CPUs, the size of the list is derived from the following difference, before the patch:
        StartCount - FinishedCount
  where "StartCount" is set by the BSP at startup, and FinishedCount is incremented by the APs themselves. Here the patch replaces this difference with
        StartCount - RunningCount
  that is, the difference is no longer calculated from the BSP's startup counter and the APs' shared finish counter, but from the RunningCount measurement that the BSP does itself, in CheckAllAPs().
  (3) Finally, the patch changes the meaning of RunningCount. Before the patch, we have:
        - StartCount: the number of APs the BSP starts up,
        - RunningCount: the number of finished APs that the BSP collected.
  After the patch, StartCount is removed, and RunningCount is *redefined* as the following difference:
        OLD_StartCount - OLD_RunningCount
  giving the number of APs that the BSP started up but hasn't collected yet.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/MpInitLib: Remove redundant CpuStateFinished State. (Eric Dong, 2018-07-26; 2 files changed, -11/+12)
  The current CPU state definition includes CpuStateIdle and CpuStateFinished. After investigation, the code can use CpuStateIdle in place of CpuStateFinished. This reduces the number of states and eases maintenance.
  > Before this patch, the state transitions for an AP are:
  >
  >   Idle ----> Ready ----> Busy ----> Finished ----> Idle
  >        [BSP]       [AP]       [AP]            [BSP]
  >
  > After the patch, the state transitions for an AP are:
  >
  >   Idle ----> Ready ----> Busy ----> Idle
  >        [BSP]       [AP]       [AP]
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/CpuMpPei: Correct BIST PPI logic. (Marvin Häuser, 2018-07-24; 1 file changed, -10/+12)
  Currently, the SecPlatformInformation2 PPI is installed when either there is none present or the present one doesn't lack data. Update the logic to only install the SecPlatformInformation2 PPI when it's not already installed so that an up-to-date PPI remains the only one and unchanged.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/CpuDxe: fix incorrect check of SMM mode (Jian J Wang, 2018-07-20; 1 file changed, -1/+8)
  The current IsInSmm() method uses gEfiSmmBase2ProtocolGuid.InSmm() to check whether the current processor is in SMM mode. But this is not correct, because gEfiSmmBase2ProtocolGuid.InSmm() can only detect whether the caller is running in SMRAM or from an SMM driver; it cannot guarantee that the caller is running in SMM mode. Because SMM mode loads its own page table, adding an extra check of the saved DXE page table base address against the current CR3 register value gives the correct answer for sure (in SMM mode or not in SMM mode).
  There are indiscriminate uses of Context.X64 and Context.Ia32 in the code, which is not a good coding practice and will cause a potential issue. In addition, the related structure type definition is not packed, which is also a potential issue. This is not covered by this patch but is tracked by the bug below.
  https://bugzilla.tianocore.org/show_bug.cgi?id=1039
  This is an issue caused by check-in 2a1408d1d739ead00c96397549be7a9fc53c9c6e.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
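  A hedged sketch of the strengthened check (the helper and variable names are illustrative, not CpuDxe's exact code): SMM loads its own page table, so comparing CR3 against the page-table base saved while running in DXE confirms whether the processor is really in SMM mode, in addition to the SMM_BASE2 "in SMRAM" test.

    #include <Library/BaseLib.h>
    #include <Protocol/SmmBase2.h>

    extern EFI_SMM_BASE2_PROTOCOL  *mSmmBase2;          // assumed to be located earlier
    extern UINTN                   mDxePageTableBase;   // CR3 captured while executing in DXE

    BOOLEAN
    IsInSmmMode (                                       // hypothetical helper
      VOID
      )
    {
      BOOLEAN  InSmm;

      InSmm = FALSE;
      if (mSmmBase2 != NULL) {
        mSmmBase2->InSmm (mSmmBase2, &InSmm);
      }
      //
      // InSmm only proves we run from SMRAM/an SMM driver; the CR3 comparison
      // proves the SMM page table is active, i.e. the CPU is in SMM mode.
      //
      return (BOOLEAN)(InSmm && (AsmReadCr3 () != mDxePageTableBase));
    }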
* UefiCpuPkg/MpInitLib: Fix VS2012 build failure (Eric Dong, 2018-07-20; 1 file changed, -0/+5)
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/MpInitLib: Remove useless code. (Eric Dong, 2018-07-20; 1 file changed, -15/+0)
  Remove the useless code erroneously added by change 58942277bcbf41abda5f6e3a1c89d571105d5983.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/MpInitLib: Optimize get processor number performance. (Eric Dong, 2018-07-20; 1 file changed, -1/+4)
  The current function has low performance because it calls GetApicId() inside the loop, so GetApicId() may be called more than once. The new logic calls GetApicId() once and uses that value to search for the processor.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Jeff Fan <vanjeff_919@hotmail.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
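  A sketch of the optimization, based on the description above (the CPU_MP_DATA and CPU_INFO_IN_HOB field access is simplified and assumed):

    #include <Library/LocalApicLib.h>

    EFI_STATUS
    GetProcessorNumberSketch (              // illustrative, not the library's exact function
      IN  CPU_MP_DATA  *CpuMpData,
      OUT UINTN        *ProcessorNumber
      )
    {
      CPU_INFO_IN_HOB  *CpuInfoInHob;
      UINT32           CurrentApicId;
      UINTN            Index;

      CpuInfoInHob  = (CPU_INFO_IN_HOB *)(UINTN)CpuMpData->CpuInfoInHob;  // assumed layout
      CurrentApicId = GetApicId ();         // LocalApicLib; now called exactly once
      for (Index = 0; Index < CpuMpData->CpuCount; Index++) {
        if (CpuInfoInHob[Index].ApicId == CurrentApicId) {
          *ProcessorNumber = Index;
          return EFI_SUCCESS;
        }
      }
      return EFI_NOT_FOUND;
    }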
* UefiCpuPkg/MpInitLib: Fix S3 resume hang issue. (Eric Dong, 2018-07-19; 4 files changed, -2/+116)
  When resuming from S3 and the CPU loop mode is MWAIT mode, if a driver asks the APs to do a task at the EndOfPei point, the APs can't be woken up and the BIOS hangs at that point.
  The root cause is that the PiSmmCpuDxeSmm driver wakes up the APs in HLT mode during the S3 resume phase to do SMM relocation. After this task, PiSmmCpuDxeSmm does not restore the APs' context, which makes the wake-up buffer saved by the CpuMpPei driver stop working.
  The solution is to let the CpuMpPei driver hook the S3SmmInitDone PPI notification. In this notify function, it checks whether the CPU loop mode is HLT mode. If not, the CpuMpPei driver sets a flag to force the BSP to use the INIT-SIPI-SIPI command to wake up the APs.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/MpInitLib: Load uCode once for each core. (Eric Dong, 2018-07-18; 1 file changed, -0/+9)
  The SDM requires only one thread per core to load the microcode. This change enables this solution.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/MpInitLib: Use BSP uCode for APs if possible. (Eric Dong, 2018-07-18; 3 files changed, -7/+45)
  Searching for uCode costs much time. If an AP has the same processor type as the BSP, the AP can use the uCode info saved by the BSP to get better performance. This change enables this solution.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
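  A hedged sketch of the fast path described above (the BSP-saved parameters and the helper name are assumptions; only the CPUID/MSR accesses follow the SDM):

    #include <Library/BaseLib.h>
    #include <Register/Cpuid.h>
    #include <Register/ArchitecturalMsr.h>

    VOID
    LoadMicrocodeFromBspInfo (              // hypothetical helper
      IN UINT32  BspSignature,              // CPUID.01h:EAX value saved by the BSP
      IN UINT8   BspPlatformId,             // MSR_IA32_PLATFORM_ID[52:50] saved by the BSP
      IN UINT64  BspPatchDataAddress        // microcode patch data the BSP already located
      )
    {
      UINT32  Signature;
      UINT8   PlatformId;

      AsmCpuid (CPUID_VERSION_INFO, &Signature, NULL, NULL, NULL);
      PlatformId = (UINT8)(RShiftU64 (AsmReadMsr64 (MSR_IA32_PLATFORM_ID), 50) & 0x7);
      if ((Signature != BspSignature) || (PlatformId != BspPlatformId)) {
        return;   // different stepping/platform: fall back to scanning the microcode region
      }
      //
      // Same processor type as the BSP: reuse the patch the BSP found instead
      // of searching the whole microcode region again.
      //
      AsmWriteMsr64 (MSR_IA32_BIOS_UPDT_TRIG, BspPatchDataAddress);
    }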
* UefiCpuPkg/MpInitLib: Relocate uCode to memory to save time. (Eric Dong, 2018-07-18; 1 file changed, -1/+32)
  Reading uCode from memory has better performance than reading it from flash, but it needs extra effort to let the BSP copy the uCode from flash to memory. Because the BSP already enables the cache in the SEC phase, relocating the uCode from flash to memory takes little time. After verification, if the system has more than one processor, loading uCode from memory reduces boot time. This change enables this optimization.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg/MpInitLib: Avoid calling PEI services from AP (Ni, Ruiyu, 2018-07-12; 2 files changed, -14/+63)
  Today's MpInitLib PEI implementation directly calls PeiServices->GetHobList() from the AP, which may cause a racing issue. This patch fixes the issue by duplicating the IDT for the APs. Because the CpuMpData structure is stored just after the IDT, the CpuMpData address equals IDTR.BASE + IDTR.LIMIT + 1.
  v2:
  1. Add ALIGN_VALUE() on BufferSize.
  2. Add ASSERT() to make sure there is no memory usage outside of the allocated buffer.
  3. Add more comments in the InitConfig path when restoring CpuData[0].VolatileRegisters.
  Cc: Jeff Fan <vanjeff_919@hotmail.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Fish Andrew <afish@apple.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Tested-by: Laszlo Ersek <lersek@redhat.com>
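  A sketch of the lookup described above, assuming the library places the internal CPU_MP_DATA structure immediately after the per-AP IDT copy (the helper name is illustrative):

    #include <Library/BaseLib.h>

    CPU_MP_DATA *
    GetCpuMpDataFromIdtSketch (
      VOID
      )
    {
      IA32_DESCRIPTOR  Idtr;

      AsmReadIdtr (&Idtr);
      //
      // IDTR.LIMIT is the IDT size minus one, so the first byte after the IDT
      // is Base + Limit + 1; that is where CpuMpData was stored.
      //
      return (CPU_MP_DATA *)(UINTN)(Idtr.Base + Idtr.Limit + 1);
    }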
* UefiCpuPkg: Removing ipf which is no longer supported from edk2. (chenc2, 2018-06-29; 6 files changed, -234/+7)
  Removing rules for Ipf source files:
  * Remove the source file whose path contains "ipf" and which is also listed in the [Sources.IPF] section of an INF file.
  * Remove the source file which is listed in the [Components.IPF] section of a DSC file and not listed in any other [Components] section.
  * Remove the embedded Ipf code for MDE_CPU_IPF.
  Removing rules for INF files:
  * Remove IPF from VALID_ARCHITECTURES comments.
  * Remove DXE_SAL_DRIVER from LIBRARY_CLASS in the [Defines] section.
  * Remove the INF which is only listed in the [Components.IPF] section of a DSC.
  * Remove statements from [BuildOptions] that provide IPF-specific flags.
  * Remove any IPF-specific sections.
  Removing rules for DEC files:
  * Remove the [Includes.IPF] section from the DEC.
  Removing rules for DSC files:
  * Remove IPF from SUPPORTED_ARCHITECTURES in the [Defines] section of the DSC.
  * Remove any IPF-specific sections.
  * Remove statements from [BuildOptions] that provide IPF-specific flags.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg: Clean up source files (Liming Gao, 2018-06-28; 59 files changed, -614/+614)
  1. Do not use tab characters
  2. No trailing white space in one line
  3. All files must end with CRLF
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg: Use new added Perf macros (Bi, Dandan, 2018-06-26; 1 file changed, -8/+8)
  Replace the old Perf macros with the newly added ones.
  Cc: Liming Gao <liming.gao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/CpuDxe: make register access more readable (Jian J Wang, 2018-06-19; 1 file changed, -15/+29)
  Update the code to use more meaningful constant macros and predefined register structures.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/CpuDxe: allow accessing (DXE) page table in SMM mode (Jian J Wang, 2018-06-19; 2 files changed, -35/+106)
  MdePkg/Library/SmmMemoryAllocationLib, used only by DXE_SMM_DRIVER, allows freeing memory allocated in DXE (before EndOfDxe). This is done by checking the memory range and calling gBS services to do the real operation if the memory to free is out of SMRAM. If some memory-related features, like Heap Guard, are enabled, the gBS interface will turn to EFI_CPU_ARCH_PROTOCOL.SetMemoryAttributes(), provided by the DXE driver UefiCpuPkg/CpuDxe, to change memory paging attributes. This means we have part of the DXE code running in SMM mode in certain circumstances.
  Because the page table in SMM mode is different from the one in DXE mode, and CpuDxe always uses the current registers (CR0, CR3, etc.) to get memory paging attributes, it cannot get the correct attributes of DXE memory in SMM mode from the SMM page table. This will cause incorrect memory manipulations, like failing to release Guard pages if Heap Guard is enabled.
  The solution in this patch is to store the DXE page table information (e.g. the values of the CR0 and CR3 registers, etc.) in a global variable of the CpuDxe driver. If CpuDxe detects that it's in SMM mode, it will use this global variable to access the page table instead of the current processor registers. This avoids retrieving wrong DXE memory paging attributes and changing SMM page table attributes unexpectedly.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
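  A hedged sketch of the approach (the structure shape and names are illustrative; CpuDxe's real paging-context structure may differ): the DXE paging registers are captured once, and page-table walks use the saved values instead of the live registers whenever the code detects it is running in SMM mode.

    #include <Library/BaseLib.h>

    typedef struct {
      UINTN    Cr0;
      UINTN    Cr3;
      UINTN    Cr4;
      BOOLEAN  Valid;
    } DXE_PAGING_CONTEXT;                   // assumed shape

    STATIC DXE_PAGING_CONTEXT  mDxePagingContext;

    VOID
    SaveDxePagingContext (
      VOID
      )
    {
      mDxePagingContext.Cr0   = AsmReadCr0 ();
      mDxePagingContext.Cr3   = AsmReadCr3 ();
      mDxePagingContext.Cr4   = AsmReadCr4 ();
      mDxePagingContext.Valid = TRUE;
    }

    UINTN
    GetPageTableBase (
      IN BOOLEAN  InSmm
      )
    {
      //
      // In SMM the live CR3 points at the SMM page table; use the value saved
      // in DXE so DXE memory attributes are read (and written) correctly.
      //
      if (InSmm && mDxePagingContext.Valid) {
        return mDxePagingContext.Cr3;
      }
      return AsmReadCr3 ();
    }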
* UefiCpuPkg/LocalApicLib: Exclude second SendIpi sequence on AMD processors. (Eric Dong, 2018-06-19; 2 files changed, -8/+16)
  On AMD processors the second SendIpi in the SendInitSipiSipi and SendInitSipiSipiAllExcludingSelf routines is not required, and may cause undesired side-effects during MP initialization. This patch leverages the StandardSignatureIsAuthenticAMD check to exclude the second SendIpi and its associated MicroSecondDelay (200).
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Leo Duran <leo.duran@amd.com>
  Cc: Jordan Justen <jordan.l.justen@intel.com>
  Cc: Jeff Fan <jeff.fan@intel.com>
  Cc: Liming Gao <liming.gao@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
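  A sketch of the guarded wake-up sequence (SendInitIpi()/SendStartupIpi() stand in for LocalApicLib's internal IPI helpers and are hypothetical; only StandardSignatureIsAuthenticAMD() and MicroSecondDelay() are real library calls):

    #include <Library/UefiCpuLib.h>    // StandardSignatureIsAuthenticAMD()
    #include <Library/TimerLib.h>      // MicroSecondDelay()

    VOID
    SendInitSipiSipiSketch (
      IN UINT32  ApicId,
      IN UINT32  StartupRoutine
      )
    {
      SendInitIpi (ApicId);                        // hypothetical helper
      SendStartupIpi (ApicId, StartupRoutine);     // hypothetical helper
      if (!StandardSignatureIsAuthenticAMD ()) {
        //
        // Only non-AMD processors get the 200us delay and the second SIPI.
        //
        MicroSecondDelay (200);
        SendStartupIpi (ApicId, StartupRoutine);
      }
    }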
* UefiCpuPkg: Remove X86 ASM and S files (Liming Gao, 2018-06-07; 24 files changed, -2335/+7)
  NASM has replaced the ASM and S files.
  1. Remove ASM from all modules except for the ones in the ResetVector directory. The ones in the ResetVector directory are included by Vtf0.nasmb and are also nasm style.
  2. Remove S files from the drivers only.
  3. https://bugzilla.tianocore.org/show_bug.cgi?id=881 After NASM is updated, S files can be removed from the Library.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/CpuCommonFeatures: Follow SDM for MAX CPUID feature detect (Ruiyu Ni, 2018-05-28; 1 file changed, -2/+2)
  According to the IA manual: "Before setting this bit (MSR_IA32_MISC_ENABLE[22]), BIOS must execute the CPUID.0H and examine the maximum value returned in EAX[7:0]. If the maximum value is greater than 2, this bit is supported."
  We need to fix our current detection logic to compare against 2.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Cc: Ming Shao <ming.shao@intel.com>
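  A sketch of the corrected detection, following the SDM text quoted above (the helper name is illustrative):

    #include <Library/BaseLib.h>
    #include <Register/Cpuid.h>

    BOOLEAN
    IsLimitCpuidMaxvalSupported (   // MSR_IA32_MISC_ENABLE[22] may only be set when this returns TRUE
      VOID
      )
    {
      UINT32  MaxLeaf;

      AsmCpuid (CPUID_SIGNATURE, &MaxLeaf, NULL, NULL, NULL);
      //
      // CPUID.0H:EAX[7:0] must be greater than 2 for the bit to be supported.
      //
      return (BOOLEAN)((MaxLeaf & 0xFF) > 2);
    }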
* UefiCpuPkg/SecMain: Add NORETURN decorator to SecStartup(). (Marvin Häuser, 2018-05-08; 2 files changed, -2/+9)
  The function SecStartup() is not supposed to return. Hence, add the NORETURN decorator.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Marvin Haeuser <Marvin.Haeuser@outlook.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg MpInitLib: Fix typo "sCPUID" to "CPUID" (Star Zeng, 2018-04-25; 1 file changed, -2/+2)
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: use mnemonics for FXSAVE(64)/FXRSTOR(64) (Laszlo Ersek, 2018-04-04; 3 files changed, -10/+8)
  NASM introduced FXSAVE / FXRSTOR support in commit 900fa5b26b8f ("NASM 0.98p3-hpa", 2002-04-30), which commit stands for the nasm-0.98p3-hpa release.
  NASM introduced FXSAVE64 / FXRSTOR64 support in commit 3a014348ca15 ("insns: add FXSAVE64/FXRSTOR64, drop np prefix", 2010-07-07), which was part of the "nasm-2.09" release.
  Edk2 requires nasm-2.10 or later for use with the GCC toolchain family, and nasm-2.12.01 or later for use with all other toolchain families.
  Replace the binary encoding of the FXSAVE(64)/FXRSTOR(64) instructions with mnemonics. I verified that the "Ia32/SmiException.obj", "X64/SmiEntry.obj" and "X64/SmiException.obj" files are rebuilt after this patch, without any change in content.
  This patch removes the last instructions encoded with DBs from PiSmmCpuDxeSmm.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: remove DBs from SmmRelocationSemaphoreComplete32() (Laszlo Ersek, 2018-04-04; 2 files changed, -17/+23)
  (1) SmmRelocationSemaphoreComplete32() runs in 32-bit mode, so wrap it in a (BITS 32 ... BITS 64) bracket.
  (2) SmmRelocationSemaphoreComplete32() currently compiles to:
  > 000002AE C6050000000001  mov byte [dword 0x0],0x1
  > 000002B5 FF2500000000    jmp dword [dword 0x0]
  where the first instruction is patched with the contents of "mRebasedFlag" (so that (*mRebasedFlag) is set to 1), and the second instruction is patched with the address of "mSmmRelocationOriginalAddress" (so that we jump to "mSmmRelocationOriginalAddress"). In its current form the first instruction could not be patched with PatchInstructionX86(), given that the operand to patch is not encoded in the trailing bytes of the instruction. Therefore, adopt an EAX-based version, inspired by both the IA32 and X64 variants of SmmRelocationSemaphoreComplete():
  > 000002AE 50              push eax
  > 000002AF B800000000      mov eax,0x0
  > 000002B4 C60001          mov byte [eax],0x1
  > 000002B7 58              pop eax
  > 000002B8 FF2500000000    jmp dword [dword 0x0]
  Here both instructions can be patched with PatchInstructionX86(), and the DBs can be replaced with native NASM syntax.
  (3) Turn the "mRebasedFlagAddr32" and "mSmmRelocationOriginalAddressPtr32" variables into markers that suit PatchInstructionX86().
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmInitStack" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 4 files changed, -8/+12)
  Rename the variable to "gPatchSmmInitStack" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm".
  The size of the patched source operand is (sizeof (UINTN)).
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: eliminate "gSmmJmpAddr" and related DBs (Laszlo Ersek, 2018-04-04; 4 files changed, -28/+7)
  The IA32 version of "SmmInit.nasm" does not need "gSmmJmpAddr" at all (its PiSmmCpuSmmInitFixupAddress() variant doesn't do anything either). We can simply use the NASM syntax for the following Mixed-Size Jump:
  > jmp PROTECT_MODE_CS : dword @32bit
  The generated object code for the instruction is unchanged:
  > 00000182 66EA5A0000000800  jmp dword 0x8:0x5a
  (The NASM manual explains that putting the DWORD prefix after the colon ":" reflects the intent better, since it is the offset that is a DWORD. Thus, that's what I used. However, both syntaxes are interchangeable, hence the ndisasm output.)
  The X64 version of "SmmInit.nasm" appears to require "gSmmJmpAddr"; however that's accidental, not inherent:
  - Bring LONG_MODE_CODE_SEGMENT from "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h" to "SmmInit.nasm" as LONG_MODE_CS, same as PROTECT_MODE_CODE_SEGMENT was brought to the IA32 version as PROTECT_MODE_CS earlier.
  - Apply the NASM-native Mixed-Size Jump syntax again, but jump to the fixed zero offset in LONG_MODE_CS. This will produce no relocation record at all. Add a label after the instruction.
  - Modify PiSmmCpuSmmInitFixupAddress() to patch the jump target backwards from the label. Because we modify the DWORD offset with a DWORD access, the segment selector is unharmed in the instruction, and we need not set it from PiCpuSmmEntry().
  According to "objdump --reloc", the X64 version undergoes only the following relocations, after this patch:
  > RELOCATION RECORDS FOR [.text]:
  > OFFSET           TYPE          VALUE
  > 0000000000000095 R_X86_64_PC32 SmmInitHandler-0x0000000000000004
  > 00000000000000e0 R_X86_64_PC32 mRebasedFlag-0x0000000000000004
  > 00000000000000ea R_X86_64_PC32 mSmmRelocationOriginalAddress-0x0000000000000004
  Therefore the patch does not regress <https://bugzilla.tianocore.org/show_bug.cgi?id=849> ("Enable XCODE5 tool chain for UefiCpuPkg with nasm source code").
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr0" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 5 files changed, -9/+12)
  Like "gSmmCr4" in the previous patch, "gSmmCr0" is not only used for machine code patching, but also as a means to communicate the initial CR0 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr0Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr0Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr0" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr0" variable to "gPatchSmmCr0" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr0".
  This lets us remove the binary (DB) encoding of "mov eax, Cr0Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr4" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 5 files changed, -9/+16)
  Unlike "gSmmCr3" in the previous patch, "gSmmCr4" is not only used for machine code patching, but also as a means to communicate the initial CR4 value from SmmRelocateBases() to InitSmmS3ResumeState(). In other words, the last four bytes of the "mov eax, Cr4Value" instruction's binary representation are utilized as normal data too.
  In order to get rid of the DB for "mov eax, Cr4Value", we have to split both roles, patching and data flow. Introduce the "mSmmCr4" global (SMRAM) variable for the data flow purpose. Rename the "gSmmCr4" variable to "gPatchSmmCr4" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(), to the value now contained in "mSmmCr4".
  This lets us remove the binary (DB) encoding of "mov eax, Cr4Value" in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmmCr3" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 4 files changed, -8/+8)
  Rename the variable to "gPatchSmmCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmmInit.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
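  A sketch of the patching pattern this series adopts, using BaseLib's PatchInstructionX86() (the wrapper function is illustrative; the label name follows the commit message):

    #include <Library/BaseLib.h>

    //
    // Defined in SmmInit.nasm right after the instruction's 32-bit immediate.
    //
    extern X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr3;

    VOID
    SetSmmEntryCr3 (
      IN UINT32  Cr3Value
      )
    {
      //
      // Overwrite the 4-byte immediate operand that ends at the gPatchSmmCr3
      // label with the page-table base the SMI entry code loads into CR3.
      //
      PatchInstructionX86 (gPatchSmmCr3, Cr3Value, 4);
    }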
* UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from X64 SmmStartup() (Laszlo Ersek, 2018-04-04; 1 file changed, -9/+8)
  (This patch is the 64-bit variant of commit e75ee97224e5, "UefiCpuPkg/PiSmmCpuDxeSmm: remove unneeded DBs from IA32 SmmStartup()", 2018-01-31.)
  The SmmStartup() function executes in SMM, which is very similar to real mode. Add "BITS 16" before it and "BITS 64" after it (just before the @LongMode label).
  Remove the manual 0x66 operand-size override prefixes, for selecting 32-bit operands -- the sizes of our operands trigger NASM to insert the prefixes automatically in almost every spot. The one place where we have to add it back manually is the LGDT instruction. In the LGDT instruction we also replace the binary 0x2E prefix with the normal NASM syntax for CS segment override.
  The stores to the Control Registers were always 32-bit wide; the source code only used RAX as source operand because it generated the expected object code (with NASM compiling the source as if in BITS 64). With BITS 16 added, we can use the actual register width in the source operands (EAX).
  This patch causes NASM to generate byte-identical object code (determined by disassembling both the pre-patch and post-patch versions, and comparing the listings), except:
  > @@ -231,7 +231,7 @@
  >  000001D2 6689D3            mov ebx,edx
  >  000001D5 66B800000000      mov eax,0x0
  >  000001DB 0F22D8            mov cr3,eax
  > -000001DE 662E670F0155F6    o32 lgdt [cs:ebp-0xa]
  > +000001DE 2E66670F0155F6    o32 lgdt [cs:ebp-0xa]
  >  000001E5 66B800000000      mov eax,0x0
  >  000001EB 80CC02            or ah,0x2
  >  000001EE 0F22E0            mov cr4,eax
  The only difference is the prefix list order; it changes from
  - 0x66, 0x2E, 0x67
  to
  - 0x2E, 0x66, 0x67
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "XdSupported" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 4 files changed, -6/+16)
  "mXdSupported" is a global BOOLEAN variable, initialized to TRUE. The CheckFeatureSupported() function is executed on all processors (not concurrently though), called from SmmInitHandler(). If XD support is found to be missing on any CPU, then "mXdSupported" is set to FALSE, and further processors omit the check. Afterwards, "mXdSupported" is read by several assembly and C code locations.
  The tricky part is *where* "mXdSupported" is allocated (defined):
  - Before commit 717fb60443fb ("UefiCpuPkg/PiSmmCpuDxeSmm: Add paging protection.", 2016-11-17), it used to be a normal global variable, defined (allocated) in "SmmProfile.c".
  - With said commit, we moved the definition (allocation) of "mXdSupported" into "SmiEntry.nasm". The variable was defined over the last byte of a "mov al, 1" instruction, so that setting it to FALSE in CheckFeatureSupported() would patch the instruction to "mov al, 0". The subsequent conditional jump would change behavior, plus all further read references to "mXdSupported" (in C and assembly code) would read back the source (imm8) operand of the patched MOV instruction as data. This trick required that the MOV instruction be encoded with DB.
  In order to get rid of the DB, we have to split both roles: we need a label for the code patching, and "mXdSupported" has to be defined (allocated) independently of the code patching. Of course, their values must always remain in sync.
  (1) Reinstate the "mXdSupported" definition and initialization in "SmmProfile.c" from before commit 717fb60443fb. Change the assembly language definition ("global") to a declaration ("extern").
  (2) Define the "gPatchXdSupported" label (type X86_ASSEMBLY_PATCH_LABEL) in "SmiEntry.nasm", and add the C-language declaration to "SmmProfileInternal.h". Replace the DB with the MOV mnemonic (keeping the imm8 source operand with value 1).
  (3) In CheckFeatureSupported(), whenever "mXdSupported" is set to FALSE, patch the assembly code in sync, with PatchInstructionX86().
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiCr3" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 3 files changed, -8/+8)
  Rename the variable to "gPatchSmiCr3" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmiEntry.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmiStack" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 3 files changed, -9/+11)
  Rename the variable to "gPatchSmiStack" so that its association with PatchInstructionX86() is clear from the declaration. Also change its type to X86_ASSEMBLY_PATCH_LABEL.
  Unlike "gSmbase" in the previous patch, "gSmiStack"'s patched value is also de-referenced by C code (in other words, it is read back after patching): the InstallSmiHandler() function stores "CpuIndex" to the given CPU's SMI stack through "gSmiStack". Introduce the local variable "CpuSmiStack" in InstallSmiHandler() for calculating the stack location separately, then use this variable for both patching into the assembly code, and for storing "CpuIndex" through it.
  It's assumed that "volatile" stood in the declaration of "gSmiStack" because we used to read "gSmiStack" back for de-referencing; with that use gone, we can remove "volatile" too. (Note that the *target* of the pointer was never volatile-qualified.)
  Finally, replace the binary (DB) encoding of "mov esp, imm32" in "SmiEntry.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: patch "gSmbase" with PatchInstructionX86() (Laszlo Ersek, 2018-04-04; 3 files changed, -12/+12)
  Rename the variable to "gPatchSmbase" so that its association with PatchInstructionX86() is clear from the declaration, change its type to X86_ASSEMBLY_PATCH_LABEL, and patch it with PatchInstructionX86(). This lets us remove the binary (DB) encoding of some instructions in "SmiEntry.nasm".
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg/PiSmmCpuDxeSmm: remove *.S and *.asm assembly files (Laszlo Ersek, 2018-04-04; 17 files changed, -4294/+0)
  All edk2 toolchains use NASM for compiling X86 assembly source code. We plan to remove X86 *.S and *.asm files globally, in order to reduce maintenance and confusion:
  http://mid.mail-archive.com/4A89E2EF3DFEDB4C8BFDE51014F606A14E1B9F76@SHSMSX104.ccr.corp.intel.com
  https://lists.01.org/pipermail/edk2-devel/2018-March/022690.html
  https://bugzilla.tianocore.org/show_bug.cgi?id=881
  Let's start with UefiCpuPkg/PiSmmCpuDxeSmm: remove the *.S and *.asm dialects (both Ia32 and X64) of the SmmInit, SmiEntry, SmiException and MpFuncs sources.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Michael D Kinney <michael.d.kinney@intel.com>
  Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Andrew Fish <afish@apple.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
* UefiCpuPkg PiSmmCpuDxeSmm: Refine some comments about SmmMemoryAttribute (Star Zeng, 2018-04-04; 2 files changed, -23/+17)
  1. Fix some "support" to "supported".
  2. Fix some "set" to "clear" in ClearMemoryAttributes interface.
  3. Remove redundant comments for GetMemoryAttributes interface.
  Cc: Jian J Wang <jian.j.wang@intel.com>
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/MpInitLib: Disable interrupt at ExitBootServices AP Mwait (Hao Wu, 2018-03-20; 2 files changed, -2/+5)
  Within function ApWakeupFunction():
  When the source level debugger is enabled, AP interrupts will be enabled by EnableDebugAgent(). Then the AP function will be executed by:
    Procedure (Parameter);
  After the AP function returns, AP interrupts will be disabled when the APs are placed in loop mode (both HltLoop and MwaitLoop).
  However, at ExitBootServices, ApWakeupFunction() is called with 'Procedure' equal to RelocateApLoop() (an ExitBootServices callback registered within InitMpGlobalData()). RelocateApLoop() never returns, so it has to disable the AP interrupts by itself. However, we find that interrupts are only disabled for the HltLoop case, but not for the MwaitLoop case (within file MpFuncs.nasm).
  This commit adds the missing disabling of AP interrupts for MwaitLoop. Also, for X64, this commit will disable the interrupts before switching to 32-bit mode.
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Jian J Wang <jian.j.wang@intel.com>
  Cc: Star Zeng <star.zeng@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Hao Wu <hao.a.wu@intel.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
* UefiCpuPkg CpuExceptionHandlerLib: use FixedPcdGetSize() as the macro value (Liming Gao, 2018-03-16; 1 file changed, -3/+3)
  FixedPcdGetSize() is used as a macro value, while PcdGetSize() is used as a global variable or function. The usage here is to access the macro value.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Liming Gao <liming.gao@intel.com>
  Cc: Wang Jian J <jian.j.wang@intel.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Michael Kinney <michael.d.kinney@intel.com>
  Reviewed-by: Jian J Wang <jian.j.wang@intel.com>
* UefiCpuPkg/MpInitLib: put mReservedApLoopFunc in executable memory (Jian J Wang, 2018-03-08; 1 file changed, -4/+34)
  If PcdDxeNxMemoryProtectionPolicy is enabled for EfiReservedMemoryType memory, a #PF will be triggered for each AP after ExitBootServices in the SCRT test. The root cause is that the AP wakeup code executed at that time is stored in memory of type EfiReservedMemoryType (referenced by the global mReservedApLoopFunc), which is marked as non-executable.
  This patch fixes the issue by setting the memory of mReservedApLoopFunc to be executable immediately after allocation.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
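  A hedged sketch of the fix (not the driver's literal code): right after the reserved-memory buffer for the AP loop code is allocated, clear the EFI_MEMORY_XP attribute on it through the DXE services table so the APs can execute from it even when PcdDxeNxMemoryProtectionPolicy marks reserved memory as non-executable.

    #include <Library/DxeServicesTableLib.h>

    EFI_STATUS
    MakeApLoopBufferExecutable (            // illustrative helper name
      IN EFI_PHYSICAL_ADDRESS  Address,
      IN UINT64                Size
      )
    {
      EFI_STATUS                       Status;
      EFI_GCD_MEMORY_SPACE_DESCRIPTOR  Desc;

      Status = gDS->GetMemorySpaceDescriptor (Address, &Desc);
      if (EFI_ERROR (Status)) {
        return Status;
      }
      return gDS->SetMemorySpaceAttributes (
                    Address,
                    Size,
                    Desc.Attributes & ~(UINT64)EFI_MEMORY_XP
                    );
    }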
* UefiCpuPkg/CpuCommonFeaturesLib: Fix coding style issue (Dandan Bi, 2018-03-08; 1 file changed, -1/+1)
  Boolean values do not need to use explicit comparisons to TRUE or FALSE.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
* UefiCpuPkg S3ResumePei: Signal S3SmmInitDone (Star Zeng, 2018-03-03; 2 files changed, -14/+37)
  Cc: Jiewen Yao <jiewen.yao@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Star Zeng <star.zeng@intel.com>
  Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/CpuExceptionHandlerLib: fix incorrect init of exception stack (Jian J Wang, 2018-02-28; 1 file changed, -1/+1)
  This issue was introduced by the following commit, which tried to add stack switch support on behalf of the Stack Guard feature:
  0ff5aa9cae1ea276668fa4398d047aa9fda3c2c7
  The field KnownGoodStackTop in CPU_EXCEPTION_INIT_DATA is initialized to the start address of the array mNewStack. This is wrong. It must be the end of mNewStack. This patch fixes this mistake.
  Cc: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
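  A minimal sketch of the corrected initialization (the buffer size and helper name are assumptions; only the pointer arithmetic is the point): for a stack that grows down, the "top" is the address one byte past the end of the buffer, not its first byte.

    #define GOOD_STACK_SIZE  0x2000        // illustrative size, not the library's constant

    STATIC UINT8  mNewStack[GOOD_STACK_SIZE];

    UINTN
    GetKnownGoodStackTop (
      VOID
      )
    {
      //
      // Pre-patch bug: returned (UINTN)mNewStack, i.e. the lowest address.
      //
      return (UINTN)mNewStack + sizeof (mNewStack);
    }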
* UefiCpuPkg/S3Resume: Remove useless perf code (Dandan Bi, 2018-02-12; 2 files changed, -133/+1)
  V2: Just update the commit message to reference the hash values of the new performance infrastructure.
  Our new performance infrastructure (edk2 trunk commits SHA-1: 73fef64f14d1b97ae9bd4705df3becc022391eba ~ SHA-1: 115eae650bfd2be2c2bc37360f4a755065e774c4) can support dumping performance data from the ACPI table in the OS. So we can remove the old perf code that writes performance data to the OS.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Liming Gao <liming.gao@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>
  Reviewed-by: Star Zeng <star.zeng@intel.com>
* UefiCpuPkg/S3Resume: Add more perf entry for S3 phase (Bi, Dandan, 2018-02-09; 1 file changed, -1/+14)
  V2: Just update the commit message.
  Add more perf entries to hook BootScriptDonePpi/EndOfPeiPpi/EndOfS3Resume. Add the new perf entries with Identifiers PERF_INMODULE_START_ID/PERF_INMODULE_END_ID, which are defined in the new performance infrastructure (edk2 trunk commits SHA-1: 73fef64f14d1b97ae9bd4705df3becc022391eba ~ SHA-1: 115eae650bfd2be2c2bc37360f4a755065e774c4). PERF_INMODULE_START_ID/PERF_INMODULE_END_ID are general Identifiers which are used within a module.
  Cc: Eric Dong <eric.dong@intel.com>
  Cc: Laszlo Ersek <lersek@redhat.com>
  Cc: Liming Gao <liming.gao@intel.com>
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Dandan Bi <dandan.bi@intel.com>
  Reviewed-by: Liming Gao <liming.gao@intel.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
* UefiCpuPkg/FeaturesLib: don't init MCi_CTL/STATUS when MCA's disabled (Ruiyu Ni, 2018-02-09; 1 file changed, -15/+17)
  Today's McaInitialize() doesn't check the State value before initializing MCi_CTL and MCi_STATUS. The patch fixes this issue by only initializing the two kinds of MSRs when State is enabled.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
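  A sketch of the corrected flow (MSR names per the SDM/MdePkg; the function shape is simplified from the feature-library convention): the bank MSRs are touched only when the feature State is enabled.

    #include <Library/BaseLib.h>
    #include <Register/ArchitecturalMsr.h>

    VOID
    McaInitializeSketch (
      IN BOOLEAN  State
      )
    {
      MSR_IA32_MCG_CAP_REGISTER  McgCap;
      UINT32                     BankIndex;

      if (!State) {
        return;                            // MCA disabled: leave MCi_CTL/MCi_STATUS alone
      }
      McgCap.Uint64 = AsmReadMsr64 (MSR_IA32_MCG_CAP);
      for (BankIndex = 0; BankIndex < McgCap.Bits.Count; BankIndex++) {
        //
        // Enable all error-reporting sources and clear any stale status.
        // Each bank occupies four consecutive MSRs starting at MC0_CTL.
        //
        AsmWriteMsr64 (MSR_IA32_MC0_CTL + BankIndex * 4, MAX_UINT64);
        AsmWriteMsr64 (MSR_IA32_MC0_STATUS + BankIndex * 4, 0);
      }
    }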
* UefiCpuPkg/FeaturesLib: Fix Haswell CPU hang with 50% throttling (Ruiyu Ni, 2018-02-08; 1 file changed, -29/+23)
  Today's implementation assumes that only SandyBridge CPUs support the Extended On-Demand Clock Modulation Duty Cycle. Actually it is supported when CPUID.06h.EAX[5] == 1.
  When the platform requests 50% throttling, the value 1000b should be set into the low 4 bits of IA32_CLOCK_MODULATION. But the wrong code sets 1000b into bits [3:1], which causes an assertion.
  Contributed-under: TianoCore Contribution Agreement 1.1
  Signed-off-by: Ruiyu Ni <ruiyu.ni@intel.com>
  Cc: Jeff Fan <vanjeff_919@hotmail.com>
  Reviewed-by: Eric Dong <eric.dong@intel.com>
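  A sketch of the duty-cycle encoding described above (MSR layout per the SDM; the helper is illustrative). With CPUID.06h:EAX[5] == 1, the duty cycle is the full 4-bit field IA32_CLOCK_MODULATION[3:0] in 6.25% steps, so 50% is 1000b placed across bits [3:0], not in bits [3:1]:

    #include <Library/BaseLib.h>
    #include <Register/ArchitecturalMsr.h>

    VOID
    ProgramDutyCycleSketch (
      IN UINT8  DutyCycle4Bit    // full 4-bit value, e.g. 8 (1000b) for 50% throttling
      )
    {
      UINT64  Value;

      Value  = AsmReadMsr64 (MSR_IA32_CLOCK_MODULATION);
      Value &= ~(UINT64)0x1F;                          // clear duty cycle [3:0] and enable [4]
      Value |= (UINT64)(DutyCycle4Bit & 0xF) | BIT4;   // program the duty cycle, then enable modulation
      AsmWriteMsr64 (MSR_IA32_CLOCK_MODULATION, Value);
    }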