author		Hugh Dickins <hughd@google.com>			2022-02-14 18:38:47 -0800
committer	Matthew Wilcox (Oracle) <willy@infradead.org>	2022-02-17 11:59:40 -0500
commit		b74355078b6554271371532a5daa3b1a3db620f9
tree		2db40bdaeeb1d7842560e2d06b51236d7beb5f3a /mm/migrate.c
parent		2fbb0c10d1e8222604132b3a3f81bfd8345a44b6
mm/munlock: page migration needs mlock pagevec drained
Page migration of a VM_LOCKED page tends to fail, because when the old
page is unmapped, it is put on the mlock pagevec with raised refcount,
which then fails the freeze.
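
For context, migration can only replace the page once it has frozen the
refcount at the expected value. A minimal sketch of that check, loosely
modelled on the freeze in mm/migrate.c (the helper name and surrounding
logic here are simplified for illustration, not the kernel's actual code):

	#include <linux/errno.h>	/* -EAGAIN */
	#include <linux/migrate.h>	/* MIGRATEPAGE_SUCCESS */
	#include <linux/page_ref.h>	/* page_ref_freeze() */

	static int try_freeze_for_migration(struct page *page, int expected_count)
	{
		/*
		 * page_ref_freeze() drops the refcount to zero only if it
		 * currently equals expected_count.  A page parked on a
		 * per-cpu mlock pagevec holds one extra reference, so the
		 * freeze fails and the migration attempt bails out.
		 */
		if (!page_ref_freeze(page, expected_count))
			return -EAGAIN;
		return MIGRATEPAGE_SUCCESS;
	}
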
At first I thought this would be fixed by a local mlock_page_drain() at
the upper rmap_walk() level - which would have nicely batched all the
munlocks of that page; but tests show that the task can too easily move
to another cpu, leaving pagevec residue behind which fails the migration.
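
Sketched as code (never merged, purely hypothetical; rmap_walk(),
mlock_page_drain() and smp_processor_id() are the real functions, but
unmap_and_drain() is invented here and signatures are approximate to
this point in the tree), the rejected placement and its race look like:

	/* rmap_walk(): <linux/rmap.h>; mlock_page_drain(): mm/internal.h */
	static void unmap_and_drain(struct page *page, struct rmap_walk_control *rwc)
	{
		rmap_walk(page, rwc);	/* may put page on THIS cpu's mlock pagevec */
		/*
		 * RACE: if the task is preempted and rescheduled on another
		 * cpu between the walk above and the drain below, the drain
		 * runs on the new cpu, while the stranded pagevec entry on
		 * the old cpu keeps the page's refcount raised.
		 */
		mlock_page_drain(smp_processor_id());
	}
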
So have try_to_migrate_one() drain the local pagevec after page_remove_rmap()
from a VM_LOCKED vma; and do the same in try_to_unmap_one(), whose
TTU_IGNORE_MLOCK users would want the same treatment; and do the same
in remove_migration_pte() - not important when successfully inserting
a new page, but necessary when hoping to retry after failure.
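
The mm/rmap.c side of this commit falls outside the diffstat below, but
the pattern added in try_to_migrate_one() and try_to_unmap_one() is
essentially the same two lines right after the rmap removal (a sketch,
not the verbatim hunk):

	 	page_remove_rmap(subpage, vma, false);
	+	if (vma->vm_flags & VM_LOCKED)
	+		mlock_page_drain(smp_processor_id());
	 	put_page(page);
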
Any new pagevec runs the risk of adding a new way of stranding, and we
might discover other corners where mlock_page_drain() or lru_add_drain()
would now help.
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--	mm/migrate.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index f4bcf1541b62..e7d0b68d5dcb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -251,6 +251,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			page_add_file_rmap(new, vma, false);
 		set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
 	}
+	if (vma->vm_flags & VM_LOCKED)
+		mlock_page_drain(smp_processor_id());
 
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(vma, pvmw.address, pvmw.pte);