author     Oliver Smith-Denny <osde@linux.microsoft.com>                  2024-03-09 11:06:03 -0800
committer  mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>  2024-03-14 16:29:22 +0000
commit     e7486b50646d6a645706b61d2f8d74b3dca23ce0 (patch)
tree       fb5097c4006b2e91131d20e0d5967a6efa7af114 /MdeModulePkg/Core
parent     68461c2c37afe11c7dda2769efc10bf20d2a7b23 (diff)
MdeModulePkg: DxeCore: Do Not Apply Guards to Unsupported Types
Currently, there are multiple issues when page or pool guards are
allocated for runtime memory regions that require alignments larger
than EFI_PAGE_SIZE. Multiple other issues have recently been fixed for
these same systems (notably ARM64, which has a 64k runtime page
allocation granularity). The heap guard system is only built to support
4k guard pages and 4k alignment.
Today, the address returned to a caller of AllocatePages will not be
aligned correctly to the runtime page allocation granularity, because
the heap guard system does not take non-4k alignment requirements into
consideration.
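As an illustration only (this snippet is not part of the patch, and the
helper name is hypothetical), a caller relying on the spec-mandated
runtime alignment would trip the assert below when a 4k head guard is
prepended to its allocation:

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Library/DebugLib.h>

  VOID
  CheckRuntimeAllocationAlignment (
    VOID
    )
  {
    EFI_PHYSICAL_ADDRESS  Address;
    EFI_STATUS            Status;

    //
    // Allocate one runtime granule of EfiRuntimeServicesData. The result
    // should be aligned to RUNTIME_PAGE_ALLOCATION_GRANULARITY (64k on
    // AARCH64); with a 4k head guard prepended, the returned address is
    // only 4k aligned and this assert fires.
    //
    Status = gBS->AllocatePages (
                    AllocateAnyPages,
                    EfiRuntimeServicesData,
                    EFI_SIZE_TO_PAGES (RUNTIME_PAGE_ALLOCATION_GRANULARITY),
                    &Address
                    );
    if (!EFI_ERROR (Status)) {
      ASSERT ((Address & (RUNTIME_PAGE_ALLOCATION_GRANULARITY - 1)) == 0);
      gBS->FreePages (
             Address,
             EFI_SIZE_TO_PAGES (RUNTIME_PAGE_ALLOCATION_GRANULARITY)
             );
    }
  }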
However, even with this bug fixed, the Memory Allocation Table cannot
be produced, and an OS with a larger-than-4k page granularity will not
have aligned memory regions, because the guard pages are reported as
part of the same memory allocation. So what would have been, on an
ARM64 system, a 64k runtime memory allocation is actually tracked by
the Page.c code as a 72k allocation, because the guard pages are
tracked as part of the same allocation. This behavior is inherent to
the current heap guard architecture.
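As a sketch of that bookkeeping (illustration only, assuming the usual
Base.h/UefiBaseType.h size definitions; this is not code from the
patch), the guarded allocation is tracked as 72k rather than 64k:

  //
  // Head guard  :  1 page  =  4k
  // Caller data : 16 pages = 64k
  // Tail guard  :  1 page  =  4k
  //              -----------------
  // Tracked     : 18 pages = 72k
  //
  STATIC_ASSERT (
    EFI_PAGE_SIZE + SIZE_64KB + EFI_PAGE_SIZE == SIZE_64KB + SIZE_8KB,
    "a guarded 64k runtime allocation is tracked as 72k"
    );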
This could also be fixed by rearchitecting the heap guard system to
respect alignment requirements and shift the guard pages inside of the
outer, rounded allocation, or by making the guard pages themselves the
runtime granularity. Both of these approaches have issues. In the
former case, we break UEFI spec 2.10 section 2.3.6 for AARCH64, which
states that each 64k page of a runtime memory region must not have
mixed memory attributes, which pushing the guard pages inside would
create. In the latter case, an immense amount of memory is wasted on
such large guard pages, and with pool guard many systems could not
support an additional 128k of guard pages for every runtime allocation.
The simpler and safer solution is to disallow page and pool guards for
runtime memory allocations on systems that have a runtime granularity
greater than EFI_PAGE_SIZE (4k). The usefulness of such guards is
limited, as OSes do not map guard pages today, so these ranges are only
protected at boot time. This also prevents other bugs from being
exposed by using guards for regions that have a non-4k alignment
requirement; again, multiple such bugs have cropped up because the heap
guard system was not built to support them.
This patch adds a static assert to ensure that either the runtime
granularity is EFI_PAGE_SIZE or the PCD bits that enable heap guard for
runtime memory regions are not set. It also adds a check in the page
and pool allocation code to ensure that, at allocation time, we do not
attempt to guard a runtime region (the PCDs are close to being removed
in favor of dynamic heap guard configurations).
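For reference only (this decoding is not part of the change itself),
the 0x461 mask used in the new static assert selects exactly the four
affected memory types; the heap guard PCD bit positions follow
EFI_MEMORY_TYPE:

  //
  // (1 << EfiReservedMemoryType)  = 0x001
  // (1 << EfiRuntimeServicesCode) = 0x020
  // (1 << EfiRuntimeServicesData) = 0x040
  // (1 << EfiACPIMemoryNVS)       = 0x400
  //                                 -----
  //                                 0x461
  //
  STATIC_ASSERT (
    ((1ULL << EfiReservedMemoryType) |
     (1ULL << EfiRuntimeServicesCode) |
     (1ULL << EfiRuntimeServicesData) |
     (1ULL << EfiACPIMemoryNVS)) == 0x461,
    "0x461 covers exactly the four runtime-relevant memory types"
    );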
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4674
Github PR: https://github.com/tianocore/edk2/pull/5382
Cc: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Diffstat (limited to 'MdeModulePkg/Core')
-rw-r--r--  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h  14
-rw-r--r--  MdeModulePkg/Core/Dxe/Mem/Page.c       11
-rw-r--r--  MdeModulePkg/Core/Dxe/Mem/Pool.c       11
3 files changed, 36 insertions, 0 deletions
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 24b4206c0e..578e857465 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -469,4 +469,18 @@ PromoteGuardedFreePages (
extern BOOLEAN mOnGuarding;
+//
+// The heap guard system does not support non-EFI_PAGE_SIZE alignments.
+// Architectures that require larger RUNTIME_PAGE_ALLOCATION_GRANULARITY
+// cannot have EfiRuntimeServicesCode, EfiRuntimeServicesData, EfiReservedMemoryType,
+// and EfiACPIMemoryNVS guarded. OSes do not map guard pages anyway, so this is a
+// minimal loss. Not guarding prevents alignment mismatches
+//
+STATIC_ASSERT (
+ RUNTIME_PAGE_ALLOCATION_GRANULARITY == EFI_PAGE_SIZE ||
+ (((FixedPcdGet64 (PcdHeapGuardPageType) & 0x461) == 0) &&
+ ((FixedPcdGet64 (PcdHeapGuardPoolType) & 0x461) == 0)),
+ "Unsupported Heap Guard configuration on system with greater than EFI_PAGE_SIZE RUNTIME_PAGE_ALLOCATION_GRANULARITY"
+ );
+
#endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index cd201d36a3..26584648c2 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -1411,6 +1411,17 @@ CoreInternalAllocatePages (
Alignment = RUNTIME_PAGE_ALLOCATION_GRANULARITY;
}
+ //
+ // The heap guard system does not support non-EFI_PAGE_SIZE alignments.
+ // Architectures that require larger RUNTIME_PAGE_ALLOCATION_GRANULARITY
+ // will have the runtime memory regions unguarded. OSes do not
+ // map guard pages anyway, so this is a minimal loss. Not guarding prevents
+ // alignment mismatches
+ //
+ if (Alignment != EFI_PAGE_SIZE) {
+ NeedGuard = FALSE;
+ }
+
if (Type == AllocateAddress) {
if ((*Memory & (Alignment - 1)) != 0) {
return EFI_NOT_FOUND;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index ccfce8c5f9..72293e6dfe 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -381,6 +381,17 @@ CoreAllocatePoolI (
}
//
+ // The heap guard system does not support non-EFI_PAGE_SIZE alignments.
+ // Architectures that require larger RUNTIME_PAGE_ALLOCATION_GRANULARITY
+ // will have the runtime memory regions unguarded. OSes do not
+ // map guard pages anyway, so this is a minimal loss. Not guarding prevents
+ // alignment mismatches
+ //
+ if (Granularity != EFI_PAGE_SIZE) {
+ NeedGuard = FALSE;
+ }
+
+ //
// Adjust the size by the pool header & tail overhead
//