author	Vlastimil Babka <vbabka@suse.cz>	2015-09-08 15:02:49 -0700
committer	Sasha Levin <sasha.levin@oracle.com>	2016-07-12 08:47:12 -0400
commit	f1f702e8044c1fb8791111b71b9cb2ff8b9c6e92 (patch)
tree	19d6e4f55678c69e09a9703b78e2577a5ea4406c /mm
parent	a2d8c514753276394d68414f563591f174ef86cb (diff)
mm, compaction: skip compound pages by order in free scanner
[ Upstream commit 683854270f84daa09baffe2b21d64ec88c614fa9 ]
[ Upstream commit 9fcd6d2e052eef525e94a9ae58dbe7ed4df4f5a7 ]
The compaction free scanner is looking for PageBuddy() pages and
skipping all others. For large compound pages such as THP or hugetlbfs,
we can save a lot of iterations if we skip them at once using their
compound_order(). This is generally unsafe and we can read a bogus
value of order due to a race, but if we are careful, the only danger is
skipping too much.
When tested with stress-highalloc from mmtests on a 4GB system with 1GB
hugetlbfs pages, the vmstat compact_free_scanned count decreased by at
least 15%.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Diffstat (limited to 'mm')
-rw-r--r--	mm/compaction.c	25
1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 8d010df763dc..347e998dbd16 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -371,6 +371,24 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		if (!valid_page)
 			valid_page = page;
+
+		/*
+		 * For compound pages such as THP and hugetlbfs, we can save
+		 * potentially a lot of iterations if we skip them at once.
+		 * The check is racy, but we can consider only valid values
+		 * and the only danger is skipping too much.
+		 */
+		if (PageCompound(page)) {
+			unsigned int comp_order = compound_order(page);
+
+			if (likely(comp_order < MAX_ORDER)) {
+				blockpfn += (1UL << comp_order) - 1;
+				cursor += (1UL << comp_order) - 1;
+			}
+
+			goto isolate_fail;
+		}
+
 		if (!PageBuddy(page))
 			goto isolate_fail;
@@ -423,6 +441,13 @@ isolate_fail:
 	}
 
+	/*
+	 * There is a tiny chance that we have read bogus compound_order(),
+	 * so be careful to not go outside of the pageblock.
+	 */
+	if (unlikely(blockpfn > end_pfn))
+		blockpfn = end_pfn;
+
 	/* Record how far we have got within the block */
 	*start_pfn = blockpfn;