author     Ard Biesheuvel <ardb@kernel.org>    2020-10-31 11:43:45 +0200
committer  Mike Rapoport <rppt@linux.ibm.com>  2020-11-04 10:42:57 +0200
commit     b9bc36704cca500e2b41be4c5bf615c1d7ddc3ce (patch)
tree       16687fb5437c2cc2d7bcd6ea5d41d525a6817b17
parent     4ef8451b332662d004df269d4cdeb7d9f31419b5 (diff)
ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations
free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.

Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
high memory pages") it rounded the beginning of each region upwards and
the end of each region downwards.

However, after that commit free_highpages() rounds both the beginning
and the end of each region downwards, so we may end up freeing a page
that is memblock_reserve()d, resulting in memory corruption.

Restore the original rounding of the region boundaries to avoid freeing
reserved pages.
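
For illustration, a minimal user-space sketch of the rounding difference,
assuming a 4 KiB page size and a hypothetical free region that starts inside
a partially reserved page; the macro definitions mirror the kernel's
PHYS_PFN(), PFN_UP() and PFN_DOWN(), and the addresses are made up for the
example:

    /* Illustrative only -- not part of the commit. */
    #include <stdio.h>

    #define PAGE_SHIFT   12
    #define PAGE_SIZE    (1UL << PAGE_SHIFT)
    #define PHYS_PFN(x)  ((unsigned long)(x) >> PAGE_SHIFT)
    #define PFN_UP(x)    (((unsigned long)(x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x)  ((unsigned long)(x) >> PAGE_SHIFT)

    int main(void)
    {
            /* Hypothetical free region: it starts mid-page because the
             * first 0x400 bytes of that page are memblock_reserve()d. */
            unsigned long range_start = 0x30000400;
            unsigned long range_end   = 0x30002000;

            /* Original rounding (restored by this patch): only pages that
             * lie entirely inside the free region are freed. */
            printf("PFN_UP(start)=0x%lx  PFN_DOWN(end)=0x%lx\n",
                   PFN_UP(range_start), PFN_DOWN(range_end));

            /* Rounding after cddb5ddf2b76: the start is rounded down, so
             * PFN 0x30000 -- which still contains reserved memory -- would
             * be handed back as a free page. */
            printf("PHYS_PFN(start)=0x%lx  PHYS_PFN(end)=0x%lx\n",
                   PHYS_PFN(range_start), PHYS_PFN(range_end));

            return 0;
    }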
Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/
Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
 arch/arm/mm/init.c    | 4 ++--
 arch/xtensa/mm/init.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)