author		Shaohua Li <shli@kernel.org>	2013-04-29 15:08:36 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-04-29 15:54:38 -0700
commit		5bc7b8aca942d03bf2716ddcfcb4e0b57e43a1b8 (patch)
tree		c76049e13755609ecbd4c4066fddd8bbfdd9f650 /mm/swap_state.c
parent		1eec6702a80e04416d528846a5ff2122484d95ec (diff)
mm: thp: add split tail pages to shrink page list in page reclaim
In page reclaim, a huge page is split.  split_huge_page() adds the tail
pages to the LRU list.  Since we are reclaiming a huge page, it is better
to reclaim all subpages of the huge page instead of just the head page.
This patch adds the split tail pages to the shrink page list so that the
tail pages can be reclaimed soon.
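For illustration only (this sketch is not part of the patch), the intended
flow in the reclaim path, assuming shrink_page_list()'s local page list as
the destination for the split tail pages, is roughly:

	/* Conceptual sketch, not the literal kernel code. */
	LIST_HEAD(page_list);		/* pages queued for reclaim in this pass */

	if (unlikely(PageTransHuge(page)))
		/*
		 * Tail pages are queued on page_list instead of being put
		 * back on the LRU, so this same reclaim pass frees them.
		 */
		if (unlikely(split_huge_page_to_list(page, &page_list)))
			return 0;	/* split failed; caller keeps the page */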
Before this patch, running a swap workload:
thp_fault_alloc 3492
thp_fault_fallback 608
thp_collapse_alloc 6
thp_collapse_alloc_failed 0
thp_split 916
With this patch:
thp_fault_alloc 4085
thp_fault_fallback 16
thp_collapse_alloc 90
thp_collapse_alloc_failed 0
thp_split 1272
Fallback allocation (thp_fault_fallback) is reduced a lot, from 608 to 16.
[akpm@linux-foundation.org: fix CONFIG_SWAP=n build]
Signed-off-by: Shaohua Li <shli@fusionio.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap_state.c')
-rw-r--r--	mm/swap_state.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index fe43fd5578cf..b3d40dcf3624 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -160,7 +160,7 @@ void __delete_from_swap_cache(struct page *page)
  * Allocate swap space for the page and add the page to the
  * swap cache.  Caller needs to hold the page lock.
  */
-int add_to_swap(struct page *page)
+int add_to_swap(struct page *page, struct list_head *list)
 {
 	swp_entry_t entry;
 	int err;
@@ -173,7 +173,7 @@ int add_to_swap(struct page *page)
 		return 0;
 
 	if (unlikely(PageTransHuge(page)))
-		if (unlikely(split_huge_page(page))) {
+		if (unlikely(split_huge_page_to_list(page, list))) {
 			swapcache_free(entry, NULL);
 			return 0;
 		}
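The matching caller-side change lives in mm/vmscan.c and is outside this
diffstat (which is limited to 'mm/swap_state.c'): shrink_page_list() passes
its local page_list, so the split tail pages land on the list being
reclaimed.  That call site changes roughly as follows:

-			if (!add_to_swap(page))
+			if (!add_to_swap(page, page_list))
 				goto activate_locked;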