author     Mike Kravetz <mike.kravetz@oracle.com>     2021-10-05 13:25:29 -0700
committer  Catalin Marinas <catalin.marinas@arm.com>  2021-10-11 18:45:19 +0100
commit     2e5809a4ddb15969503e43b06662a9a725f613ea (patch)
tree       f714a1faff3965da0160f347a615a39413861d01 /arch/arm64/mm
parent     22b70e6f2da0a4c8b1421b00cfc3016bc9d4d9d4 (diff)
arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE
For non-4K PAGE_SIZE configs, the largest gigantic huge page size is
CONT_PMD_SHIFT order. On arm64 with 64K PAGE_SIZE, the gigantic page is
16G. Therefore, one should be able to specify 'hugetlb_cma=16G' on the
kernel command line so that one gigantic page can be allocated from CMA.

However, when adding such an option the following message is produced:

hugetlb_cma: cma area should be at least 8796093022208 MiB

This is because the calculation for the non-4K gigantic page order is
incorrect in the arm64-specific routine arm64_hugetlb_cma_reserve().

Fixes: abb7962adc80 ("arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs")
Cc: <stable@vger.kernel.org> # 5.9.x
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20211005202529.213812-1-mike.kravetz@oracle.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
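On arm64, CONT_PMD_SHIFT already includes PMD_SHIFT, so the old expression
effectively counted PMD_SHIFT twice. A minimal user-space sketch of the
arithmetic, assuming the arm64 64K-page values PAGE_SHIFT = 16,
PMD_SHIFT = 29 and CONT_PMD_SHIFT = 34 (hard-coded here purely for
illustration; the kernel derives them from the page-table configuration),
reproduces both the bogus and the corrected sizes:

/*
 * Sketch of the order arithmetic behind the fix (not kernel code).
 * Assumed values for arm64 with 64K pages: PAGE_SHIFT = 16,
 * PMD_SHIFT = 29, CONT_PMD_SHIFT = 34 (32 contiguous 512M PMDs = 16G).
 */
#include <stdio.h>

#define PAGE_SHIFT     16
#define PMD_SHIFT      29
#define CONT_PMD_SHIFT 34

int main(void)
{
	unsigned int broken = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT; /* 47 */
	unsigned int fixed  = CONT_PMD_SHIFT - PAGE_SHIFT;             /* 18 */

	/* order counts base pages; convert 2^order pages to MiB */
	printf("broken order %u -> %llu MiB\n", broken,
	       1ULL << (broken + PAGE_SHIFT - 20));
	printf("fixed  order %u -> %llu MiB\n", fixed,
	       1ULL << (fixed + PAGE_SHIFT - 20));
	return 0;
}

With these assumed values the broken order evaluates to 8796093022208 MiB,
matching the message quoted above, while the fixed order evaluates to
16384 MiB, i.e. one 16G gigantic page.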
Diffstat (limited to 'arch/arm64/mm')
-rw-r--r--   arch/arm64/mm/hugetlbpage.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 23505fc35324..a8158c948966 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -43,7 +43,7 @@ void __init arm64_hugetlb_cma_reserve(void)
#ifdef CONFIG_ARM64_4K_PAGES
order = PUD_SHIFT - PAGE_SHIFT;
#else
- order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+ order = CONT_PMD_SHIFT - PAGE_SHIFT;
#endif
/*
* HugeTLB CMA reservation is required for gigantic