author    Kefeng Wang <wangkefeng.wang@huawei.com>  2024-06-18 17:12:39 +0800
committer Andrew Morton <akpm@linux-foundation.org>  2024-07-03 19:30:20 -0700
commit 78fefd04c123493bbf28434768fa577b2153c79b (patch)
tree   3c53f2c78b94554cbda1394c3ec0f90729491e79 /mm/huge_memory.c
parent 08af2c12e3ef379de94ba5653f2f9ae9a588ecd9 (diff)
mm: memory: convert clear_huge_page() to folio_zero_user()
Patch series "mm: improve clear and copy user folio", v2.

Some folio conversions. One improvement is to move the address alignment
into the caller, as it is only needed when we don't know which address
will be accessed while clearing/copying user folios.

This patch (of 4):

Replace clear_huge_page() with folio_zero_user(), which takes a folio
instead of a page. Get the number of pages directly with
folio_nr_pages(), removing the pages_per_huge_page argument.
Furthermore, move the address alignment from folio_zero_user() to the
callers, since the alignment is only needed when we don't know which
address will be accessed.

Link: https://lkml.kernel.org/r/20240618091242.2140164-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20240618091242.2140164-2-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
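In caller terms, the conversion looks like the following sketch. It is
illustrative only: the first two calls mirror the hunk below, while the
ALIGN_DOWN()/HPAGE_PMD_SIZE line shows the kind of alignment that now
lives in callers lacking a precise fault address, and is not part of
this hunk.

	/* Before: page-based API, page count passed explicitly. */
	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);

	/*
	 * After: folio-based API; folio_zero_user() derives the page
	 * count itself via folio_nr_pages(), so the argument is gone.
	 */
	folio_zero_user(folio, vmf->address);

	/*
	 * Callers that do not know the exact address to be accessed
	 * now align the hint themselves, e.g. (illustrative only):
	 */
	folio_zero_user(folio, ALIGN_DOWN(vmf->address, HPAGE_PMD_SIZE));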
Diffstat (limited to 'mm/huge_memory.c')
-rw-r--r--  mm/huge_memory.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 14a05c643806..be598e9a5f98 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -944,10 +944,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 			goto release;
 	}
 
-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	folio_zero_user(folio, vmf->address);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
-	 * clear_huge_page writes become visible before the set_pmd_at()
+	 * folio_zero_user writes become visible before the set_pmd_at()
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
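The ordering comment in the hunk follows the usual initialize-then-publish
pattern. A conceptual sketch of the sequence (condensed, not the literal
kernel source; haddr and entry come from the surrounding fault handler):

	folio_zero_user(folio, vmf->address);		/* (1) zero the folio's pages */
	__folio_mark_uptodate(folio);			/* (2) smp_wmb(), then set PG_uptodate */
	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);	/* (3) publish the mapping */

	/*
	 * The smp_wmb() in step (2) orders the zeroing stores of step (1)
	 * before the PMD store in step (3), so another CPU that observes
	 * the new PMD cannot read stale, non-zero data from the folio.
	 */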