author    Hugh Dickins <hughd@google.com>                  2013-02-22 16:36:09 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-02-23 17:50:23 -0800
commit    9e16b7fb1d066d38d01fd57c449f2640c5d208cb
tree      9c9cc35eea61bc16a563cfcdd5468b021daef0fa /mm/swapfile.c
parent    5117b3b835f288314a2d4e5512bc1747e3a7c8ed
mm,ksm: swapoff might need to copy
Before establishing that KSM page migration was the cause of my
WARN_ON_ONCE(page_mapped(page))s, I suspected that they came from the
lack of a ksm_might_need_to_copy() in swapoff's unuse_pte() - which in
many respects is equivalent to faulting in a page.
In fact I've never caught that as the cause; but in theory unuse_pte()
does at least need the KSM_RUN_UNMERGE check in ksm_might_need_to_copy(),
to avoid bringing a KSM page back in when it's not supposed to be.
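For context, a condensed sketch of the decision logic in
ksm_might_need_to_copy() as it stood around this series (simplified from
mm/ksm.c of this era; details and error handling are elided, so treat it
as illustrative rather than authoritative):

	struct page *ksm_might_need_to_copy(struct page *page,
			struct vm_area_struct *vma, unsigned long address)
	{
		struct anon_vma *anon_vma = page_anon_vma(page);
		struct page *new_page;

		if (PageKsm(page)) {
			/*
			 * The KSM_RUN_UNMERGE check: a stable KSM page may
			 * only be reinserted while unmerging is NOT running.
			 */
			if (page_stable_node(page) &&
			    !(ksm_run & KSM_RUN_UNMERGE))
				return page;	/* reuse the KSM page as is */
		} else if (!anon_vma) {
			return page;		/* no rmap: safe to reuse */
		} else if (anon_vma->root == vma->anon_vma->root &&
			   page->index == linear_page_index(vma, address)) {
			return page;		/* page already belongs here */
		}
		if (!PageUptodate(page))
			return page;		/* let the caller report the error */

		/* Otherwise hand back a private copy for this mm */
		new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
		if (new_page) {
			copy_user_highpage(new_page, page, address, vma);
			SetPageDirty(new_page);
			__SetPageUptodate(new_page);
			__set_page_locked(new_page);
		}
		return new_page;
	}

Note that the copy comes back locked and holding its own reference, which
is why the new out_nolock path in unuse_pte() below must unlock_page()
and put_page() when page != swapcache.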
I intended to copy how it's done in do_swap_page(), but I have a strong
aversion to how "swapcache" ends up being used there: so it is reworked
here with a plain "page != swapcache" test instead, as sketched below.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Petr Holasek <pholasek@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swapfile.c')

 mm/swapfile.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9b51266413cd..c72c648f750c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -874,11 +874,17 @@ unsigned int count_swap_pages(int type, int free)
 static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, swp_entry_t entry, struct page *page)
 {
+	struct page *swapcache;
 	struct mem_cgroup *memcg;
 	spinlock_t *ptl;
 	pte_t *pte;
 	int ret = 1;
 
+	swapcache = page;
+	page = ksm_might_need_to_copy(page, vma, addr);
+	if (unlikely(!page))
+		return -ENOMEM;
+
 	if (mem_cgroup_try_charge_swapin(vma->vm_mm, page,
 					GFP_KERNEL, &memcg)) {
 		ret = -ENOMEM;
@@ -897,7 +903,10 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	get_page(page);
 	set_pte_at(vma->vm_mm, addr, pte,
 		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
-	page_add_anon_rmap(page, vma, addr);
+	if (page == swapcache)
+		page_add_anon_rmap(page, vma, addr);
+	else /* ksm created a completely new copy */
+		page_add_new_anon_rmap(page, vma, addr);
 	mem_cgroup_commit_charge_swapin(page, memcg);
 	swap_free(entry);
 	/*
@@ -908,6 +917,10 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 out:
 	pte_unmap_unlock(pte, ptl);
 out_nolock:
+	if (page != swapcache) {
+		unlock_page(page);
+		put_page(page);
+	}
 	return ret;
 }
 