author	Dave P Martin <Dave.Martin@arm.com>	2015-06-16 17:38:47 +0100
committer	Sasha Levin <sasha.levin@oracle.com>	2015-07-03 23:02:32 -0400
commit	0815b75f534fb3d55a6f4020513c6c932583f095 (patch)
tree	4715155c9d1d1b082db9e795aa38846dbba94208 /arch
parent	e71254d2aa10d9b5f548bc06d2fbb018deeffac7 (diff)
arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP
[ Upstream commit b9bcc919931611498e856eae9bf66337330d04cc ]

The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto the base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock. This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there are at least 2 memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Diffstat (limited to 'arch')
-rw-r--r--	arch/arm64/mm/init.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index c95464a33f36..f752943f75d2 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -238,7 +238,7 @@ static void __init free_unused_memmap(void)
* memmap entries are valid from the bank end aligned to
* MAX_ORDER_NR_PAGES.
*/
- prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+ prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
MAX_ORDER_NR_PAGES);
}