author     Jian J Wang <jian.j.wang@intel.com>        2018-03-13 10:08:24 +0800
committer  Star Zeng <star.zeng@intel.com>            2018-03-14 16:15:27 +0800
commit     dd12683e1f82a8d17fb6167ce4b3f6f3a1d10559
tree       877f45c16f28cd0a10366b4dcbef0e1b0e31a8a2 /MdeModulePkg
parent     58793b8838f500955c8a7a548b4b450e81798f6e
MdeModulePkg/Core: fix mem alloc issues in heap guard
There are two ASSERT issues that can be triggered by the boot loader of
Windows 10.

The first is caused by allocating memory inside heap guard while another
memory allocation is already in progress, which is not allowed in the
DXE core. The heap guard feature was designed to avoid such re-entrant
memory allocation, but there is a hole in FindGuardedMemoryMap(). The
fix adds the AllocMapUnit parameter to the while() condition, which
prevents memory allocation from happening during a Guard page check
operation.
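
As an illustration only, below is a minimal, self-contained C sketch of
the re-entrancy rule this fix enforces; it is not the EDK2 source. The
names GrowOneLevel() and FindMap(), the shift values, and the standard C
types standing in for the EDK2 ones are all assumptions made for the
example. It shows the intent of the change: only an allocation path
(AllocMapUnit == true) may deepen the map table, because deepening it
calls back into the allocator, while a pure guard-page check must never
do so.

#include <stdbool.h>
#include <stdint.h>

#define MAP_TABLE_DEPTH  6   /* stand-in for GUARDED_HEAP_MAP_TABLE_DEPTH */

/* Illustrative values only; the real mLevelShift table differs. */
static const unsigned mLevelShift[MAP_TABLE_DEPTH] = { 58, 48, 38, 28, 18, 8 };
static unsigned       mMapLevel = 1;   /* current depth of the map table */

/* Stand-in for adding one more level to the map table. In the real code
   this allocates pages, i.e. it re-enters the memory allocator.         */
static bool
GrowOneLevel (void)
{
  mMapLevel++;
  return true;
}

/* Only an allocation path (AllocMapUnit == true) may deepen the table;
   a guard-page check must never trigger an allocation.                  */
static bool
FindMap (uint64_t Address, bool AllocMapUnit)
{
  while (AllocMapUnit &&
         mMapLevel < MAP_TABLE_DEPTH &&
         (Address >> mLevelShift[MAP_TABLE_DEPTH - mMapLevel - 1]) != 0) {
    if (!GrowOneLevel ()) {
      return false;
    }
  }

  /* Address still outside the table's coverage and growing was not
     allowed: report "not mapped" instead of allocating.                 */
  if (mMapLevel < MAP_TABLE_DEPTH &&
      (Address >> mLevelShift[MAP_TABLE_DEPTH - mMapLevel - 1]) != 0) {
    return false;
  }

  /* ...walk the existing table levels here...                           */
  return true;
}

int
main (void)
{
  uint64_t Addr = 0x100000000ULL;

  FindMap (Addr, false);   /* check path: table is never grown   */
  FindMap (Addr, true);    /* alloc path: table may be deepened  */
  return 0;
}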
The second is caused by the core trying to allocate page 0 with a Guard
page, which makes the start address wrap around to the end of the
supported system address space. Per the heap guard requirements, the fix
simply skips the free memory at page 0 and lets the core continue
searching for free memory above it.
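
The wrap-around itself is plain unsigned arithmetic; a small,
self-contained C program is sketched below to make it concrete. The
PAGE_SIZE value and the printout are illustrative assumptions, not taken
from the EDK2 code. Subtracting one page from a candidate start at
page 0 computes 0 - 0x1000, which rolls over to the top of the 64-bit
address space, so the patch instead returns 0 from AdjustMemoryS() and
lets the core keep searching above page 0.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  0x1000ULL   /* illustrative 4 KiB page */

int
main (void)
{
  uint64_t Target    = 0;                   /* candidate allocation at page 0      */
  uint64_t GuardHead = Target - PAGE_SIZE;  /* where a head Guard page would start */

  /* 0 - 0x1000 wraps to 0xFFFFFFFFFFFFF000, i.e. the start address
     rolls back to the end of the supported system address space.    */
  printf ("Guarded start would be 0x%016llx\n",
          (unsigned long long) GuardHead);

  /* The patch makes AdjustMemoryS() return 0 for this case, so the core
     skips the free memory at page 0 and keeps searching above it.       */
  return 0;
}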
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Diffstat (limited to 'MdeModulePkg')
-rw-r--r--  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 8
1 file changed, 6 insertions, 2 deletions
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 19245049c2..ac043b5d9b 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -225,8 +225,8 @@ FindGuardedMemoryMap (
   //
   // Adjust current map table depth according to the address to access
   //
-  while (mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH
-         &&
+  while (AllocMapUnit &&
+         mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH &&
          RShiftU64 (
            Address,
            mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1]
@@ -904,6 +904,10 @@ AdjustMemoryS (
   }
 
   Target = Start + Size - SizeRequested;
+  ASSERT (Target >= Start);
+  if (Target == 0) {
+    return 0;
+  }
 
   if (!IsGuardPage (Start + Size)) {
     // No Guard at tail to share. One more page is needed.