path: root/mm/memory-failure.c
author	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>	2017-05-03 14:56:22 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-05-03 15:52:12 -0700
commit	286c469a988fbaf68e3a97ddf1e6c245c1446968 (patch)
tree	82ef1c1a429b6cb910d786bbc94561b79412ef38 /mm/memory-failure.c
parent	8bcb74de764aaa261d6af3ce5ac723e435f00ff4 (diff)
mm: hwpoison: call shake_page() after try_to_unmap() for mlocked page
Memory error handler calls try_to_unmap() for error pages in various states. If the error page is an mlocked page, error handling could fail with a "still referenced by 1 users" message. This is because the page is linked to, and stays in, the lru cache after the following call chain:

  try_to_unmap_one
    page_remove_rmap
      clear_page_mlock
        putback_lru_page
          lru_cache_add

memory_failure() calls shake_page() to handle this kind of issue, but the current code doesn't cover this case because shake_page() is called only before try_to_unmap(). So this patch adds a shake_page() call after it.

Fixes: 23a003bfd23ea9ea0b7756b920e51f64b284b468 ("mm/madvise: pass return code of memory_failure() to userspace")
Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Link: http://lkml.kernel.org/r/1493197841-23986-3-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Xiaolong Ye <xiaolong.ye@intel.com>
Cc: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
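Applied in context, the change amounts to the fragment sketched here. This is only a condensed reading aid assembled from the hunks in the diff below; surrounding code is elided, so it is not a standalone compilable unit, and every identifier comes from the commit message or diff.

    /* Inside hwpoison_user_mappings(), near the top of the function: */
    struct page *hpage = *hpagep;
    bool mlocked = PageMlocked(hpage);  /* remember the state before unmapping clears it */

    /*
     * ... try_to_unmap() runs here; for an mlocked page the chain
     * try_to_unmap_one -> page_remove_rmap -> clear_page_mlock ->
     * putback_lru_page -> lru_cache_add can leave the page in the
     * lru cache ...
     */

    /*
     * After the unmap step: flush the lru cache once more so the page
     * is not reported as "still referenced by 1 users" later on.
     */
    if (mlocked)
        shake_page(hpage, 0);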
Diffstat (limited to 'mm/memory-failure.c')
-rw-r--r--	mm/memory-failure.c	8
1 file changed, 8 insertions, 0 deletions
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9d87fcab96c9..73066b80d14a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -916,6 +916,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	bool unmap_success;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
+	bool mlocked = PageMlocked(hpage);
 
 	/*
 	 * Here we are interested only in user-mapped pages, so skip any
@@ -980,6 +981,13 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		       pfn, page_mapcount(hpage));
 
 	/*
+	 * try_to_unmap() might put mlocked page in lru cache, so call
+	 * shake_page() again to ensure that it's flushed.
+	 */
+	if (mlocked)
+		shake_page(hpage, 0);
+
+	/*
 	 * Now that the dirty bit has been propagated to the
 	 * struct page and all unmaps done we can decide if
 	 * killing is needed or not. Only kill when the page