author    | Hugh Dickins <hughd@google.com>                 | 2020-08-06 23:26:25 -0700
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2020-08-26 10:29:53 +0200
commit    | 0b383dae5f81863b9a6767c2985500d461d4e566 (patch)
tree      | 137169fb01d7db64b1f39a9e201795b5cdbcba62 /mm
parent    | 61c4decf3743d69bb83bef15a1aedf6844d6f0ea (diff)
khugepaged: khugepaged_test_exit() check mmget_still_valid()
[ Upstream commit bbe98f9cadff58cdd6a4acaeba0efa8565dabe65 ]
Move collapse_huge_page()'s mmget_still_valid() check into
khugepaged_test_exit() itself. collapse_huge_page() is used for anon THP
only, and earned its mmget_still_valid() check because it inserts a huge
pmd entry in place of the page table's pmd entry; whereas
collapse_file()'s retract_page_tables() or collapse_pte_mapped_thp()
merely clears the page table's pmd entry.

But core dumping without mmap lock must have been as open to mistaking a
racily cleared pmd entry for a page table at physical page 0, as
exit_mmap() was. And we certainly have no interest in mapping as a THP
once dumping core.
Fixes: 59ea6d06cfa9 ("coredump: fix race condition between collapse_huge_page() and core dumping")
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: <stable@vger.kernel.org> [4.8+]
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021217020.27773@eggly.anvils
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
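
For context, a minimal sketch of the consolidated check. The new
khugepaged_test_exit() line is taken from the diff below; the
mmget_still_valid() body is paraphrased from kernels of this era (it tests
mm->core_state) and is not quoted verbatim from this tree:

static inline bool mmget_still_valid(struct mm_struct *mm)
{
	/* core_state is set while a core dump of this mm is in progress */
	return likely(!mm->core_state);
}

static inline int khugepaged_test_exit(struct mm_struct *mm)
{
	/* exiting, or being core dumped: either way khugepaged must back off */
	return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
}

Folding the check into khugepaged_test_exit() means the other callers
mentioned above, retract_page_tables() and collapse_pte_mapped_thp(), also
back off once dumping has started, matching the "no interest in mapping as
a THP once dumping core" rationale.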
Diffstat (limited to 'mm')
-rw-r--r-- | mm/khugepaged.c | 5
1 file changed, 1 insertion, 4 deletions
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 04b4c38d0c18..a1b7475c05d0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -394,7 +394,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
 {
-	return atomic_read(&mm->mm_users) == 0;
+	return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
 }
 
 int __khugepaged_enter(struct mm_struct *mm)
@@ -1006,9 +1006,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 * handled by the anon_vma lock + PG_lock.
 	 */
 	down_write(&mm->mmap_sem);
-	result = SCAN_ANY_PROCESS;
-	if (!mmget_still_valid(mm))
-		goto out;
 	result = hugepage_vma_revalidate(mm, address, &vma);
 	if (result)
 		goto out;