path: root/mm/memory.c
author: Kefeng Wang <wangkefeng.wang@huawei.com> 2023-11-18 10:32:28 +0800
committer: Andrew Morton <akpm@linux-foundation.org> 2023-12-12 10:57:05 -0800
commit: 1486fb50136f4799946f5ecfe050094574647153 (patch)
tree: 904490f5dabda092285502c41c5710d3c7541d21 /mm/memory.c
parent: 6140edeea8bf30bf94c23b18c39448b43f528f46 (diff)
mm: ksm: use more folio api in ksm_might_need_to_copy()
Patch series "mm: cleanup and use more folio in page fault", v3. Rename page_copy_prealloc() to folio_prealloc(), which is used by more functions, also do more folio conversion in page fault. This patch (of 5): Since ksm only support normal page, no swapout/in for ksm large folio too, add large folio check in ksm_might_need_to_copy(), also convert page->index to folio->index as page->index is going away. Then convert ksm_might_need_to_copy() to use more folio api to save nine compound_head() calls, short 'address' to reduce max-line-length. Link: https://lkml.kernel.org/r/20231118023232.1409103-1-wangkefeng.wang@huawei.com Link: https://lkml.kernel.org/r/20231118023232.1409103-2-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: David Hildenbrand <david@redhat.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com> Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
0 files changed, 0 insertions, 0 deletions