author		Yang Shi <yang.shi@linux.alibaba.com>	2019-11-30 17:58:07 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-12-01 12:59:10 -0800
commit		4afab1cd256e425803374b58702ea86a05b0acf9 (patch)
tree		34e8c9bf32216e98ecc336c2fcb420a3966aacb7 /mm
parent		26083eb6b15448e7ec5182e33f9b1ba7ebce3a62 (diff)
mm: shmem: use proper gfp flags for shmem_writepage()
shmem_writepage() uses GFP_ATOMIC to allocate swap cache. GFP_ATOMIC used to mean __GFP_HIGH, but now it means __GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM. However, shmem_writepage() should write out to swap only in response to memory pressure, so __GFP_KSWAPD_RECLAIM looks useless since the caller may be kswapd itself or already in direct reclaim.

In addition, XArray node allocations from PF_MEMALLOC contexts could completely exhaust the page allocator; __GFP_NOMEMALLOC stops emergency reserves from being allocated.

Here just copy the gfp flags used by add_to_swap().

Hugh: "a cleanup to make the two calls look the same when they don't need to be different (whereas the call from __read_swap_cache_async() rightly uses a lower priority gfp)".

Link: http://lkml.kernel.org/r/1572991351-86061-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
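For context, a minimal sketch of the flag choice the commit message describes (the variable name below is illustrative, not part of the patch):

#include <linux/gfp.h>	/* gfp_t and the __GFP_* flag definitions */

/*
 * Illustrative sketch only; shmem_swapcache_gfp is a hypothetical name.
 * In this kernel era GFP_ATOMIC expands to
 * __GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM, so passing it from
 * shmem_writepage() would needlessly wake kswapd and could dip into the
 * memalloc emergency reserves from a reclaim (PF_MEMALLOC) context.
 * The patch instead passes the same flags that add_to_swap() uses:
 * some extra priority via __GFP_HIGH, no emergency reserves
 * (__GFP_NOMEMALLOC), and no allocation-failure warnings (__GFP_NOWARN).
 */
static const gfp_t shmem_swapcache_gfp =
	__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN;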
Diffstat (limited to 'mm')
-rw-r--r--	mm/shmem.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 6e4e742db5c2..3c336b02cf08 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1369,7 +1369,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		if (list_empty(&info->swaplist))
 			list_add(&info->swaplist, &shmem_swaplist);
 
-		if (add_to_swap_cache(page, swap, GFP_ATOMIC) == 0) {
+		if (add_to_swap_cache(page, swap,
+				__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN) == 0) {
 			spin_lock_irq(&info->lock);
 			shmem_recalc_inode(inode);
 			info->swapped++;