author     Miaohe Lin <linmiaohe@huawei.com>       2022-05-19 20:50:27 +0800
committer  akpm <akpm@linux-foundation.org>        2022-05-27 09:33:46 -0700
commit     14a762dd1977cf811516fd97b0262b747cac88f7 (patch)
tree       e2f6d75dd452a200adfaf248cf9877f874769d11 /mm
parent     9f186f9e5fa9ebdaef909fd45f825a6ce281f13c (diff)
mm/swapfile: fix lost swap bits in unuse_pte()
This was found by code review only; there is no real-world report.

When swapping is turned off (swapoff), the bits stored in the swap ptes
could be lost, because unuse_pte() rebuilds the present pte without
carrying them over. The new rmap-exclusive bit is fine since it is turned
into a page flag, but soft-dirty and uffd-wp are not. Restore them.
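For context (an illustration, not part of the patch): soft-dirty is the
bit that userspace dirty-page trackers such as CRIU consume through
/proc/<pid>/pagemap, where bit 55 of each 64-bit entry reports it, so a
pte rebuilt at swapoff time without the bit makes such a check wrongly
report the page as clean. A minimal, hypothetical userspace sketch of
that check, assuming only the documented pagemap layout:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Return 1 if the page backing 'addr' is soft-dirty, 0 if clean,
     * -1 on error. */
    static int page_soft_dirty(const void *addr)
    {
            uint64_t entry;
            long pagesize = sysconf(_SC_PAGESIZE);
            off_t off = ((uintptr_t)addr / pagesize) * sizeof(entry);
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd < 0)
                    return -1;
            if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
                    close(fd);
                    return -1;
            }
            close(fd);
            /* bit 55: soft-dirty, see
             * Documentation/admin-guide/mm/pagemap.rst */
            return (int)((entry >> 55) & 1);
    }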
Link: https://lkml.kernel.org/r/20220519125030.21486-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: NeilBrown <neilb@suse.de>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/swapfile.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b86d1cc8d00b..e45874fb2ec7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1774,7 +1774,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1823,8 +1823,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);
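Note: the resulting code matches the regular swap-in fault path, where
do_swap_page() likewise transfers pte_swp_soft_dirty() and
pte_swp_uffd_wp() from the old swap pte onto the pte it installs;
unuse_pte() had been left building the new pte from mk_pte() alone.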