author	Andrew Bresticker <abrestic@rivosinc.com>	2024-06-11 08:32:16 -0700
committer	Andrew Morton <akpm@linux-foundation.org>	2024-06-24 20:52:11 -0700
commit	ab1ffc86cb5bec1c92387b9811d9036512f8f4eb (patch)
tree	bc346066345e174dde4fe767dbc24036efca8d9b /mm
parent	bf14ed81f571f8dba31cd72ab2e50fbcc877cc31 (diff)
mm/memory: don't require head page for do_set_pmd()
The requirement that the head page be passed to do_set_pmd() was added
in commit ef37b2ea08ac ("mm/memory: page_add_file_rmap() ->
folio_add_file_rmap_[pte|pmd]()") and prevents pmd-mapping in the
finish_fault() and filemap_map_pages() paths if the page to be inserted
is anything but the head page for an otherwise suitable vma and
pmd-sized page.

Matthew said:

: We're going to stop using PMDs to map large folios unless the fault is
: within the first 4KiB of the PMD.  No idea how many workloads that
: affects, but it only needs to be backported as far as v6.8, so we may
: as well backport it.

Link: https://lkml.kernel.org/r/20240611153216.2794513-1-abrestic@rivosinc.com
Fixes: ef37b2ea08ac ("mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()")
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
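As a minimal sketch of the logic being changed (simplified stand-ins,
not the real mm/ types or helpers): the old check rejected any fault
whose page was not the folio's head page, while the fixed version keys
only off the folio order and derives the head page itself.

/*
 * Hedged sketch of the do_set_pmd() check before and after this patch.
 * These structs are simplified stand-ins: as in the kernel, a folio's
 * first member is its head page, and HPAGE_PMD_ORDER is 9 on x86-64
 * (512 x 4KiB subpages = one 2MiB PMD).
 */
#include <stdbool.h>

struct page { unsigned long flags; };
struct folio { struct page page; unsigned int order; };

#define HPAGE_PMD_ORDER 9

static unsigned int folio_order(const struct folio *folio)
{
	return folio->order;
}

/* Before: a fault on any tail page (beyond the first 4KiB) bailed out. */
static bool pmd_map_ok_old(struct folio *folio, struct page *page)
{
	return page == &folio->page && folio_order(folio) == HPAGE_PMD_ORDER;
}

/* After: only the order matters; the head page is derived from the folio. */
static bool pmd_map_ok_new(struct folio *folio, struct page **pagep)
{
	if (folio_order(folio) != HPAGE_PMD_ORDER)
		return false;
	*pagep = &folio->page;	/* map the PMD at the head page */
	return true;
}

With the old check, finish_fault() and filemap_map_pages() fell back to
PTE mappings whenever the faulting page was a tail page, even though the
folio as a whole was PMD-sized; the new check lets a fault on any
subpage establish the PMD mapping.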
Diffstat (limited to 'mm')
-rw-r--r--	mm/memory.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 25a77c4fe4a0..d10e616d7389 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4608,8 +4608,9 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
 		return ret;
 
-	if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
+	if (folio_order(folio) != HPAGE_PMD_ORDER)
 		return ret;
+	page = &folio->page;
 
 	/*
 	 * Just backoff if any subpage of a THP is corrupted otherwise