author     Matthew Wilcox (Oracle) <willy@infradead.org>  2023-01-16 19:28:25 +0000
committer  Andrew Morton <akpm@linux-foundation.org>  2023-02-02 22:33:20 -0800
commit     7efecffb8e7968c4a6c53177b0053ca4765fe233
tree       1b24f9d3ced3869c347ef1f83851fa1b0089376d
parent     90c9d13a47d45f2f16530c4d62af2fa4d74dfd16
mm: remove mlock_vma_page()
All callers now have a folio and can call mlock_vma_folio(). Update the
documentation to refer to mlock_vma_folio().
Link: https://lkml.kernel.org/r/20230116192827.2146732-3-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'Documentation/mm')
 Documentation/mm/unevictable-lru.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 552c63eef954..9257235fe904 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -311,7 +311,7 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
+calls mlock_vma_folio(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page). Or when
 it is a newly allocated anonymous page, folio_add_lru_vma() calls
 mlock_new_folio() instead: similar to mlock_folio(), but can make better
@@ -413,7 +413,7 @@ However, since mlock_vma_pages_range() starts by setting VM_LOCKED on a VMA,
 before mlocking any pages already present, if one of those pages were migrated
 before mlock_pte_range() reached it, it would get counted twice in mlock_count.
 To prevent that, mlock_vma_pages_range() temporarily marks the VMA as VM_IO,
-so that mlock_vma_page() will skip it.
+so that mlock_vma_folio() will skip it.
 
 To complete page migration, we place the old and new pages back onto the LRU
 afterwards. The "unneeded" page - old page on success, new page on failure -
@@ -552,6 +552,6 @@ and node unevictable list.
 
 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
 shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
-check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
+check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them. Such pages are culled to the unevictable list when released
 by the shrinker.
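
For context on the renamed helper: the VM_LOCKED and VM_IO behaviour that the
documigration hunks above describe is implemented by mlock_vma_folio() in
mm/internal.h. The following is a simplified sketch, paraphrased rather than
copied verbatim from the kernel source around this series, so exact details
may differ:

	static inline void mlock_vma_folio(struct folio *folio,
			struct vm_area_struct *vma, bool compound)
	{
		/*
		 * VM_IO is one of the VM_SPECIAL bits, so a VMA that
		 * mlock_vma_pages_range() has temporarily marked VM_IO
		 * fails this test and is skipped - avoiding the double
		 * count in mlock_count described in the second hunk.
		 */
		if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
		    (compound || !folio_test_large(folio)))
			/* not called for a PTE mapping of part of a THP */
			mlock_folio(folio);
	}

The compound/folio_test_large() test is what the documentation's parenthetical
"(unless it is a PTE mapping of a part of a transparent huge page)" refers to.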