author		Jaewon Kim <jaewon31.kim@samsung.com>		2015-09-08 15:02:21 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2015-09-29 19:26:12 +0200
commit		154dff393c2a7b554469f626e442901ce8acc905 (patch)
tree		1c31377d00a24d8b28d402f06f8270cb1e820788 /mm
parent		804a6f7fb449d20d7fa560d040c75e95d8faf3e3 (diff)
download	linux-stable-154dff393c2a7b554469f626e442901ce8acc905.tar.gz
		linux-stable-154dff393c2a7b554469f626e442901ce8acc905.tar.bz2
		linux-stable-154dff393c2a7b554469f626e442901ce8acc905.zip
vmscan: fix increasing nr_isolated incurred by putback unevictable pages
commit c54839a722a02818677bcabe57e957f0ce4f841d upstream.

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list. But shrink_page_list()
puts back mlocked pages without passing them to the caller and without
counting them as nr_reclaimed. This leaves nr_isolated elevated.

To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller, which will take care of those pages.

Minchan said it fixes two issues:

1. cma_alloc() can succeed in the presence of unevictable pages.
   Strictly speaking, cma_alloc() in the current kernel can fail because
   of unevictable pages.

2. The NR_ISOLATED vmstat counter no longer leaks. With this,
   too_many_isolated() works; otherwise the leak could cause a hang
   until the process gets SIGKILL.

Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
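As an aside, here is a minimal, self-contained userspace model of the accounting problem described above (the names shrink_model, nr_isolated, hidden_putback and the page counts are illustrative only, not kernel code). The caller only decrements its isolated-page counter for pages it sees freed or gets handed back, so pages that the shrink path puts back silently leave the counter permanently elevated:

#include <stdio.h>

static long nr_isolated;

/*
 * Toy model of shrink_page_list(): 'candidates' pages come in, 'freed'
 * are reclaimed, and 'hidden_putback' models mlocked pages that the old
 * code put back on the LRU without telling the caller.  Only the freed
 * count is returned; the pages the caller will see again are reported
 * through 'survivors'.
 */
static long shrink_model(long candidates, long freed, long hidden_putback,
			 long *survivors)
{
	*survivors = candidates - freed - hidden_putback;
	return freed;
}

int main(void)
{
	long freed, survivors;

	/* Old behaviour: 10 pages isolated, 2 mlocked pages put back silently. */
	nr_isolated = 10;
	freed = shrink_model(10, 5, 2, &survivors);
	nr_isolated -= freed;		/* caller accounts for freed pages        */
	nr_isolated -= survivors;	/* ...and for pages it puts back itself   */
	printf("old: nr_isolated = %ld (2 pages leaked)\n", nr_isolated);

	/* Fixed behaviour: the 2 mlocked pages are returned to the caller. */
	nr_isolated = 10;
	freed = shrink_model(10, 5, 0, &survivors);
	nr_isolated -= freed;
	nr_isolated -= survivors;
	printf("new: nr_isolated = %ld\n", nr_isolated);
	return 0;
}

In the real kernel the decrement for put-back pages happens through the caller's putback path rather than a direct subtraction, but the arithmetic is the same: a page neither counted as reclaimed nor returned to the caller is never subtracted from NR_ISOLATED.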
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c	| 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0d024fc8aa8e..1a17bd7c0ce5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1153,7 +1153,7 @@ cull_mlocked:
 		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		unlock_page(page);
-		putback_lru_page(page);
+		list_add(&page->lru, &ret_pages);
 		continue;
 
 activate_locked:
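For readability, this is the cull_mlocked path with the change applied and annotated (a sketch reconstructed from the hunk above; ret_pages is the local list that shrink_page_list() hands back to its caller at the end of the function):

cull_mlocked:
		if (PageSwapCache(page))
			try_to_free_swap(page);
		unlock_page(page);
		/*
		 * The old code called putback_lru_page(page) here, returning
		 * the page to the LRU behind the caller's back: it was neither
		 * counted in nr_reclaimed nor left on a list the caller sees,
		 * so NR_ISOLATED was never decremented for it.
		 *
		 * Keeping it on ret_pages instead lets the caller put the page
		 * back and fix up the NR_ISOLATED accounting itself.
		 */
		list_add(&page->lru, &ret_pages);
		continue;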