author | Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> | 2017-05-03 14:56:22 -0700 |
---|---|---|
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2018-03-24 11:00:21 +0100 |
commit | 0b42ce075d8ed11259b36cdbed99bbab8f1f69f1 (patch) | |
tree | 138c4a305f448fa44c48398732e4c908f692ca78 /mm | |
parent | 82d938a9ed8e8d880dfd7e4def4adc1578e2710e (diff) | |
mm: hwpoison: call shake_page() after try_to_unmap() for mlocked page
[ Upstream commit 286c469a988fbaf68e3a97ddf1e6c245c1446968 ]
The memory error handler calls try_to_unmap() for error pages in various
states. If the error page is an mlocked page, error handling could fail
with a "still referenced by 1 users" message. This is because the page is
added back to the LRU cache and stays there after the following call chain.
    try_to_unmap_one
      page_remove_rmap
        clear_page_mlock
          putback_lru_page
            lru_cache_add
memory_failure() calls shake_page() to handle a similar issue, but the
current code doesn't cover this case because shake_page() is called only
before try_to_unmap(). So this patch adds another shake_page() call after
try_to_unmap() for mlocked pages.
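For context, this failure mode can be exercised from userspace by poisoning
an mlocked page through the madvise(MADV_HWPOISON) injection path that the
Fixes commit touches. The sketch below is only an illustration of that
scenario, not part of the patch: it genuinely poisons a page, so it needs
CAP_SYS_ADMIN and a kernel built with CONFIG_MEMORY_FAILURE, and the
MADV_HWPOISON fallback define is an assumption for older libc headers.

```c
/*
 * Illustrative reproducer sketch (not part of this patch): mlock a page,
 * then inject a memory error on it with madvise(MADV_HWPOISON).
 * Requires CAP_SYS_ADMIN and CONFIG_MEMORY_FAILURE. Before this fix the
 * recovery path could fail for such a page with
 * "still referenced by 1 users" because it lingered in the lru cache.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_HWPOISON
#define MADV_HWPOISON 100	/* fallback for older libc headers */
#endif

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0xaa, pagesize);	/* fault the page in */
	if (mlock(p, pagesize)) {	/* make it an mlocked page */
		perror("mlock");
		return 1;
	}
	/* Inject a hardware-poison event on the mlocked page. */
	if (madvise(p, pagesize, MADV_HWPOISON))
		perror("madvise(MADV_HWPOISON)");
	else
		printf("poisoned page at %p; see dmesg for the handling result\n",
		       (void *)p);
	return 0;
}
```

On a pre-fix kernel, handling of the poisoned mlocked page could fail as
described above; with this patch applied, recovery should succeed.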
Fixes: 23a003bfd23ea9ea0b7756b920e51f64b284b468 ("mm/madvise: pass return code of memory_failure() to userspace")
Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Link: http://lkml.kernel.org/r/1493197841-23986-3-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Xiaolong Ye <xiaolong.ye@intel.com>
Cc: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memory-failure.c | 8 |
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5aa71a82ca73..851efb004857 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -921,6 +921,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	int ret;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
+	bool mlocked = PageMlocked(hpage);
 
 	/*
 	 * Here we are interested only in user-mapped pages, so skip any
@@ -985,6 +986,13 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		       pfn, page_mapcount(hpage));
 
 	/*
+	 * try_to_unmap() might put mlocked page in lru cache, so call
+	 * shake_page() again to ensure that it's flushed.
+	 */
+	if (mlocked)
+		shake_page(hpage, 0);
+
+	/*
 	 * Now that the dirty bit has been propagated to the
 	 * struct page and all unmaps done we can decide if
 	 * killing is needed or not.  Only kill when the page
```