author    Peter Xu <peterx@redhat.com>                      2024-04-17 17:18:35 -0400
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2024-05-17 11:55:52 +0200
commit    4c806333efea1000a2a9620926f560ad2e1ca7cc (patch)
tree      ef0ba8704f147abd732a076ebe3954bcaffcb7d3 /mm
parent    cc8f0d90ba485e4ff052ce6d6f1ecea34a475348 (diff)
mm/hugetlb: fix missing hugetlb_lock for resv uncharge
[ Upstream commit b76b46902c2d0395488c8412e1116c2486cdfcb2 ]
There is a recent report on UFFDIO_COPY over hugetlb:
https://lore.kernel.org/all/000000000000ee06de0616177560@google.com/
The report trips this lockdep assertion:

    350: lockdep_assert_held(&hugetlb_lock);
This should be an issue in hugetlb proper, but it is triggered in a userfault
context, where it goes down the unlikely path in which two threads modify the
resv map together. Mike has a fix in that path for resv uncharge, but it looks
like the locking requirement was overlooked: hugetlb_cgroup_uncharge_folio_rsvd()
updates the cgroup pointer, so it must be called with the lock held.
Link: https://lkml.kernel.org/r/20240417211836.2742593-3-peterx@redhat.com
Fixes: 79aa925bf239 ("hugetlb_cgroup: fix reservation accounting")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reported-by: syzbot+4b8077a5fccc61c385a1@syzkaller.appspotmail.com
Reviewed-by: Mina Almasry <almasrymina@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/hugetlb.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e720b9ac2833..e9ae0fc81dfb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3190,9 +3190,12 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
-		if (deferred_reserve)
+		if (deferred_reserve) {
+			spin_lock_irq(&hugetlb_lock);
 			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					pages_per_huge_page(h), folio);
+			spin_unlock_irq(&hugetlb_lock);
+		}
 	}

 	return page;