| author | Gerald Schaefer <gerald.schaefer@linux.ibm.com> | 2022-08-19 18:53:43 +0200 |
| --- | --- | --- |
| committer | Vasily Gorbik <gor@linux.ibm.com> | 2022-08-30 21:57:07 +0200 |
| commit | 7c8d42fdf1a84b1a0dd60d6528309c8ec127e87c | |
| tree | 41800d2467c9100daff7a8ac84e597099fe9d09c | |
| parent | bdbf57bca6bf0b76a0f2681014552b25917c26e1 | |
s390/hugetlb: fix prepare_hugepage_range() check for 2 GB hugepages
The alignment check in prepare_hugepage_range() is wrong for 2 GB
hugepages: it only checks for 1 MB hugepage alignment.
This can result in a kernel crash in __unmap_hugepage_range(), at the
BUG_ON(start & ~huge_page_mask(h)) alignment check, for mappings
created with MAP_FIXED at an unaligned address.
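As an illustration of the failure mode described above, a minimal user-space sketch along the following lines could exercise the broken check. It assumes an s390 machine with at least one 2 GB hugepage reserved (e.g. hugepagesz=2G hugepages=1 on the kernel command line); the mapping hint address is arbitrary, and the MAP_HUGETLB/MAP_HUGE_2GB constants are spelled out by hand in case the libc headers do not provide them:

```c
/* Hedged sketch, not from the commit: only meaningful on s390 with a
 * 2 GB hugepage reserved. The hint is 1 MB aligned but NOT 2 GB aligned,
 * so the old HPAGE_MASK-based check would accept it.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB    0x40000
#endif
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_2GB
#define MAP_HUGE_2GB   (31 << MAP_HUGE_SHIFT)  /* log2(2 GB) == 31 */
#endif

int main(void)
{
        size_t len = 2UL * 1024 * 1024 * 1024;          /* one 2 GB hugepage */
        /* 1 MB aligned, deliberately not 2 GB aligned; address is arbitrary. */
        void *hint = (void *)(0x40000000000UL + (1UL << 20));
        void *p;

        p = mmap(hint, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_2GB | MAP_FIXED,
                 -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");         /* with the fix: should fail with EINVAL */
                return 1;
        }
        /* Without the fix: tearing down the misaligned mapping can trip the
         * BUG_ON() in __unmap_hugepage_range(). */
        munmap(p, len);
        return 0;
}
```

On a kernel without the fix, the 1 MB-aligned (but not 2 GB-aligned) MAP_FIXED address passes prepare_hugepage_range(), and tearing the mapping down can then hit the BUG_ON(); with the fix, the mmap() itself should fail with EINVAL.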
Fix this by correctly handling multiple hugepage sizes, similar to the
generic version of prepare_hugepage_range().
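To make the two checks concrete: the old code always tested against HPAGE_MASK, the mask for s390's 1 MB hugepages, while the new code derives the mask from the mapping's hstate. A small stand-alone sketch (mask values hardcoded to the 1 MB and 2 GB sizes purely for illustration; this is not kernel code) shows an address that the old check accepts but the new one rejects:

```c
/* Stand-alone illustration; mask values hardcoded, not kernel code. */
#include <stdio.h>

#define SZ_1M  (1UL << 20)
#define SZ_2G  (1UL << 31)

/* Old s390 check: always uses the 1 MB mask (HPAGE_MASK). */
#define MASK_1M  (~(SZ_1M - 1))
/* New check: mask taken from the mapping's hstate, here 2 GB. */
#define MASK_2G  (~(SZ_2G - 1))

int main(void)
{
        /* 1 MB aligned, but not 2 GB aligned. */
        unsigned long addr = 0x40000000000UL + SZ_1M;

        printf("old check (addr & ~HPAGE_MASK):        %#lx -> %s\n",
               addr & ~MASK_1M, (addr & ~MASK_1M) ? "rejected" : "accepted");
        printf("new check (addr & ~huge_page_mask(h)): %#lx -> %s\n",
               addr & ~MASK_2G, (addr & ~MASK_2G) ? "rejected" : "accepted");
        return 0;
}
```

The low 20 bits of the address are zero, so the 1 MB mask sees nothing wrong; bits 20 to 30 are not all zero, so the 2 GB mask catches the misalignment.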
Fixes: d08de8e2d867 ("s390/mm: add support for 2GB hugepages")
Cc: <stable@vger.kernel.org> # 4.8+
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-rw-r--r-- | arch/s390/include/asm/hugetlb.h | 6
1 file changed, 4 insertions, 2 deletions
```diff
diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index f22beda9e6d5..ccdbccfde148 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -28,9 +28,11 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 static inline int prepare_hugepage_range(struct file *file,
 			unsigned long addr, unsigned long len)
 {
-	if (len & ~HPAGE_MASK)
+	struct hstate *h = hstate_file(file);
+
+	if (len & ~huge_page_mask(h))
 		return -EINVAL;
-	if (addr & ~HPAGE_MASK)
+	if (addr & ~huge_page_mask(h))
 		return -EINVAL;
 	return 0;
 }
```
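For readers unfamiliar with the helpers used in the new check: hstate_file() looks up the hstate (per-hugepage-size state) of the mapped hugetlbfs file, and huge_page_mask() simply returns the mask stored there, so the same two lines now cover both the 1 MB and the 2 GB case. Roughly, and trimmed for illustration (a paraphrase, not the exact kernel source of that release):

```c
/* Paraphrased and trimmed for illustration; not the exact kernel source. */
struct hstate {
        unsigned int order;     /* huge page size == PAGE_SIZE << order */
        unsigned long mask;     /* ~(huge page size - 1) */
        /* ... */
};

static inline unsigned long huge_page_mask(struct hstate *h)
{
        /* 1 MB hstate -> ~0xfffffUL, 2 GB hstate -> ~0x7fffffffUL */
        return h->mask;
}
```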