path: root/mm/vmscan.c
author    David Rientjes <rientjes@google.com>  2017-07-10 15:47:20 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-07-10 16:32:30 -0700
commit    06226226773d56685de6ee9eb3f5d668e9f772ee (patch)
tree      c06a9def2199e162c70cb1a739d7f55fc939a4fd /mm/vmscan.c
parent    0a1345f8fed962958047dc0148f94d9bed160824 (diff)
mm, vmscan: avoid thrashing anon lru when free + file is low
The purpose of the code that commit 623762517e23 ("revert 'mm: vmscan: do not swap anon pages just because free+file is low'") reintroduces is to prefer swapping anonymous memory rather than thrashing the file lru.

If the anonymous inactive lru for the set of eligible zones is considered low, however, or the length of the list for the given reclaim priority does not allow for effective anonymous-only reclaiming, then avoid forcing SCAN_ANON.  Forcing SCAN_ANON will end up thrashing the small list and leave unreclaimed memory on the file lrus.

If the inactive list is insufficient, fall back to balanced reclaim so the file lru doesn't remain untouched.

[akpm@linux-foundation.org: fix build]
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1705011432220.137835@chino.kir.corp.google.com
Signed-off-by: David Rientjes <rientjes@google.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--  mm/vmscan.c  13
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e95fafc026b..e9210f825219 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2228,8 +2228,17 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		}
 
 		if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
-			scan_balance = SCAN_ANON;
-			goto out;
+			/*
+			 * Force SCAN_ANON if there are enough inactive
+			 * anonymous pages on the LRU in eligible zones.
+			 * Otherwise, the small LRU gets thrashed.
+			 */
+			if (!inactive_list_is_low(lruvec, false, memcg, sc, false) &&
+			    lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
+					>> sc->priority) {
+				scan_balance = SCAN_ANON;
+				goto out;
+			}
 		}
 	}
 
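
For readers outside the kernel tree, the following is a minimal userspace sketch of the heuristic this hunk implements. The struct, its fields, the values, and the should_force_scan_anon() helper are illustrative assumptions, not kernel APIs; only the shape of the check mirrors the patch: free + file at or below the high watermarks, inactive anon not considered low, and the inactive-anon size shifted by the reclaim priority still nonzero.

/*
 * Minimal userspace sketch of the heuristic above -- illustrative only,
 * not kernel code.  Names, fields, and numbers are assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

struct reclaim_state {
	unsigned long pgdatfree;	/* free pages in the node */
	unsigned long pgdatfile;	/* active + inactive file pages */
	unsigned long total_high_wmark;	/* sum of the zones' high watermarks */
	unsigned long inactive_anon;	/* inactive anon pages in eligible zones */
	bool inactive_anon_is_low;	/* stand-in for inactive_list_is_low() */
	int priority;			/* reclaim priority: 12 (light) .. 0 (heavy) */
};

/* True when forcing anon-only scanning is likely to reclaim something. */
static bool should_force_scan_anon(const struct reclaim_state *rs)
{
	/* free + file is not low: no reason to force anon scanning at all. */
	if (rs->pgdatfile + rs->pgdatfree > rs->total_high_wmark)
		return false;

	/*
	 * Reclaim scans roughly size >> priority pages per pass, so only
	 * force SCAN_ANON when the inactive anon list is not low and still
	 * yields pages at this priority; otherwise stay with balanced
	 * reclaim so the file lru is not left untouched.
	 */
	return !rs->inactive_anon_is_low &&
	       (rs->inactive_anon >> rs->priority) != 0;
}

int main(void)
{
	struct reclaim_state rs = {
		.pgdatfree = 1000, .pgdatfile = 2000, .total_high_wmark = 4000,
		.inactive_anon = 512, .inactive_anon_is_low = false,
		.priority = 12,
	};

	/* 512 >> 12 == 0: anon list too small at this priority, stay balanced. */
	printf("priority 12: force SCAN_ANON = %d\n", should_force_scan_anon(&rs));

	rs.priority = 6;	/* more pressure: 512 >> 6 == 8, anon-only is viable */
	printf("priority  6: force SCAN_ANON = %d\n", should_force_scan_anon(&rs));
	return 0;
}

In the kernel itself, the same decision is made with inactive_list_is_low() and lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx) >> sc->priority, as shown in the hunk above.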