author    Hugh Dickins <hughd@google.com>  2022-02-14 18:23:29 -0800
committer Matthew Wilcox (Oracle) <willy@infradead.org>  2022-02-17 11:56:44 -0500
commit    a213e5cf71cbcea4b23caedcb8fe6629a333b275 (patch)
tree      09867e2d12bb2bb6942d2dfd855517550e4d85c3 /mm/madvise.c
parent    b67bf49ce7aae72f63739abee6ac25f64bf20081 (diff)
mm/munlock: delete munlock_vma_pages_all(), allow oomreap
munlock_vma_pages_range() will still be required, when munlocking but
not munmapping a set of pages; but when unmapping a pte, the mlock count
will be maintained in much the same way as it will be maintained when
mapping in the pte.  Which removes the need for munlock_vma_pages_all()
on mlocked vmas when munmapping or exiting: eliminating the catastrophic
contention on i_mmap_rwsem, and the need for page lock on the pages.

There is still a need to update locked_vm accounting according to the
munmapped vmas when munmapping: do that in detach_vmas_to_be_unmapped().
exit_mmap() does not need locked_vm updates, so delete unlock_range().

And wasn't I the one who forbade the OOM reaper to attack mlocked vmas,
because of the uncertainty in blocking on all those page locks?
No fear of that now, so permit the OOM reaper on mlocked vmas.
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Diffstat (limited to 'mm/madvise.c')
-rw-r--r--  mm/madvise.c | 5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 5604064df464..ae35d72627ef 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -530,6 +530,11 @@ static void madvise_cold_page_range(struct mmu_gather *tlb,
 	tlb_end_vma(tlb, vma);
 }
 
+static inline bool can_madv_lru_vma(struct vm_area_struct *vma)
+{
+	return !(vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP));
+}
+
 static long madvise_cold(struct vm_area_struct *vma,
 		struct vm_area_struct **prev,
 		unsigned long start_addr, unsigned long end_addr)