author     Hugh Dickins <hugh@veritas.com>                       2008-03-04 14:29:08 -0800
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2008-03-04 16:35:15 -0800
commit     7e924aafa4b03ff71de34af8553d9a1ebc86c071 (patch)
tree       21735b7369ede27f34c41184b0a686020b80e2c6 /mm/shmem.c
parent     9442ec9df40d952b0de185ae5638a74970388e01 (diff)
memcg: mem_cgroup_charge never NULL
My memcgroup patch to fix hang with shmem/tmpfs added NULL page handling
to mem_cgroup_charge_common.  It seemed convenient at the time, but hard
to justify now: there's a perfectly appropriate swappage to charge and
uncharge instead, this is not on any hot path through shmem_getpage, and
no performance hit was observed from the slight extra overhead.

So revert that NULL page handling from mem_cgroup_charge_common; and make
it clearer by bringing page_cgroup_assign_new_page_cgroup into its body -
that was a helper I found more of a hindrance to understanding.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: David Rientjes <rientjes@google.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hirokazu Takahashi <taka@valinux.co.jp>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--  mm/shmem.c | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 90b576cbc06e..3372bc579e89 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1370,14 +1370,17 @@ repeat:
 			shmem_swp_unmap(entry);
 			spin_unlock(&info->lock);
 			unlock_page(swappage);
-			page_cache_release(swappage);
 			if (error == -ENOMEM) {
 				/* allow reclaim from this memory cgroup */
-				error = mem_cgroup_cache_charge(NULL,
+				error = mem_cgroup_cache_charge(swappage,
 					current->mm, gfp & ~__GFP_HIGHMEM);
-				if (error)
+				if (error) {
+					page_cache_release(swappage);
 					goto failed;
+				}
+				mem_cgroup_uncharge_page(swappage);
 			}
+			page_cache_release(swappage);
 			goto repeat;
 		}
 	} else if (sgp == SGP_READ && !filepage) {
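
A sketch, for readability: pieced together only from the hunk above, this is
roughly how the -ENOMEM error path of shmem_getpage() reads once the patch is
applied. It is not a verbatim excerpt of mm/shmem.c (the surrounding function
context is omitted), and the comments paraphrase the commit message rather
than the file itself.

	shmem_swp_unmap(entry);
	spin_unlock(&info->lock);
	unlock_page(swappage);
	if (error == -ENOMEM) {
		/*
		 * Charge the swap page to the current task's memory cgroup;
		 * if the cgroup is at its limit, this can make it reclaim
		 * some of its own pages and so free room for the retry below.
		 */
		error = mem_cgroup_cache_charge(swappage,
				current->mm, gfp & ~__GFP_HIGHMEM);
		if (error) {
			/* still no room in the cgroup: give up */
			page_cache_release(swappage);
			goto failed;
		}
		/* the charge was only needed to force reclaim; drop it */
		mem_cgroup_uncharge_page(swappage);
	}
	page_cache_release(swappage);
	goto repeat;

The immediate uncharge fits the commit's reasoning: the charge exists only to
"allow reclaim from this memory cgroup", as the in-line comment puts it, and
presumably the retried add_to_page_cache() charges the page again through its
normal path, so holding on to this charge would count the page twice.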