| author | David Hildenbrand <david@redhat.com> | 2024-01-05 16:57:29 +0100 |
|---|---|---|
| committer | Andrew Morton <akpm@linux-foundation.org> | 2024-01-05 10:17:43 -0800 |
| commit | 9c5938694cd0e9e00bdfb7e60900673263daf4d5 (patch) | |
| tree | 699f89fae37be66e4cd87a16f5acb948139d4bdf /include/linux/rmap.h | |
| parent | 982ae058b2f08f576e4f3d4055f8916ba789f3d4 (diff) | |
mm/rmap: silence VM_WARN_ON_FOLIO() in __folio_rmap_sanity_checks()
Unfortunately, vm_insert_page() and friends end up passing
driver-allocated folios into folio_add_file_rmap_pte() using
insert_page_into_pte_locked().
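For illustration, here is a minimal, hypothetical sketch of the kind of driver mmap handler that reaches this path; the function and variable names are made up, and error handling plus page release are omitted. It only shows how a plain compound allocation handed to vm_insert_page() can end up in rmap code via insert_page_into_pte_locked() -> folio_add_file_rmap_pte():

```c
/* Hypothetical driver mmap handler -- illustrative only, not from the patch. */
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	unsigned long addr = vma->vm_start;
	struct page *pages;	/* head page of a compound (large) allocation */
	unsigned long i;
	int ret;

	/* A compound allocation that is *not* a "large rmappable" folio. */
	pages = alloc_pages(GFP_KERNEL | __GFP_COMP,
			    get_order(npages << PAGE_SHIFT));
	if (!pages)
		return -ENOMEM;

	for (i = 0; i < npages; i++, addr += PAGE_SIZE) {
		/*
		 * vm_insert_page() marks the VMA VM_MIXEDMAP and goes through
		 * insert_page_into_pte_locked() -> folio_add_file_rmap_pte(),
		 * so rmap code sees a large folio that was never marked
		 * rmappable. (Cleanup on failure omitted for brevity.)
		 */
		ret = vm_insert_page(vma, addr, nth_page(pages, i));
		if (ret)
			return ret;
	}
	return 0;
}
```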
While these driver-allocated folios can be compound pages (large folios),
they are not proper "rmappable" folios.
In these VM_MIXEDMAP VMAs, there isn't really the concept of a reverse
mapping, so long-term, we should clean that up and not call into rmap
code.
For the time being, document how we can end up in rmap code with large
folios that are not marked rmappable.
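For context, a sketch of the contrast between the two allocation paths, assuming 6.8-era behaviour where the regular folio allocation paths (folio_alloc()/vma_alloc_folio()) mark large folios rmappable while a bare compound allocation by a driver is never marked; the helper below and its values are purely illustrative:

```c
/* Illustrative contrast -- hypothetical, not part of the patch. */
#include <linux/gfp.h>
#include <linux/mm.h>

static void rmappable_vs_driver_alloc(void)
{
	/*
	 * Pagecache/anon allocations go through folio_alloc() (or
	 * vma_alloc_folio()), which marks large folios "large rmappable",
	 * so __folio_rmap_sanity_checks() is happy with them.
	 */
	struct folio *folio = folio_alloc(GFP_KERNEL, 4);

	/*
	 * A plain compound allocation never receives that marking:
	 * folio_test_large(page_folio(page)) is true, but
	 * folio_test_large_rmappable() is false -- the case the removed
	 * VM_WARN_ON_FOLIO() tripped over once vm_insert_page() fed such
	 * a page into rmap code.
	 */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 4);

	if (folio)
		folio_put(folio);
	if (page)
		__free_pages(page, 4);
}
```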
Link: https://lkml.kernel.org/r/793c5cee-d5fc-4eb1-86a2-39e05686233d@redhat.com
Fixes: 68f0320824fa ("mm/rmap: convert folio_add_file_rmap_range() into folio_add_file_rmap_[pte|ptes|pmd]()")
Reported-by: syzbot+50ef73537bbc393a25bb@syzkaller.appspotmail.com
Closes: https://lkml.kernel.org/r/000000000000014174060e09316e@google.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'include/linux/rmap.h')
-rw-r--r-- | include/linux/rmap.h | 11 |
1 file changed, 9 insertions, 2 deletions
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index fd6fe16fa358..b7944a833668 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -199,8 +199,15 @@ static inline void __folio_rmap_sanity_checks(struct folio *folio,
 {
 	/* hugetlb folios are handled separately. */
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
-	VM_WARN_ON_FOLIO(folio_test_large(folio) &&
-			 !folio_test_large_rmappable(folio), folio);
+
+	/*
+	 * TODO: we get driver-allocated folios that have nothing to do with
+	 * the rmap using vm_insert_page(); therefore, we cannot assume that
+	 * folio_test_large_rmappable() holds for large folios. We should
+	 * handle any desired mapcount+stats accounting for these folios in
+	 * VM_MIXEDMAP VMAs separately, and then sanity-check here that
+	 * we really only get rmappable folios.
+	 */
 
 	VM_WARN_ON_ONCE(nr_pages <= 0);
 	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);