author     David Rientjes <rientjes@google.com>  2012-05-29 15:06:23 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2012-05-29 16:22:19 -0700
commit     1f1d06c34f7675026326cd9f39ff91e4555cf355
tree       b2493685179e3b222c915002648c3baba56318d2
parent     bde8bd8a1d5242589ddcaef8e017b48b207c4729
thp, memcg: split hugepage for memcg oom on cow
On COW, a new hugepage is allocated and charged to the memcg. If the system is oom or the charge to the memcg fails, however, the fault handler will return VM_FAULT_OOM, which results in an oom kill.

Instead, it's possible to fall back to splitting the hugepage so that the COW results only in an order-0 page being allocated and charged to the memcg, which has a higher likelihood of succeeding. This is expensive because the hugepage must be split in the page fault handler, but it is much better than unnecessarily oom killing a process.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
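For illustration only, a minimal, self-contained userspace sketch of the fallback idea above: when the charge for the huge copy fails, split and retry with a base page instead of reporting VM_FAULT_OOM. Every function and constant in the sketch is a hypothetical stand-in, not a kernel API; the real paths are do_huge_pmd_wp_page(), mem_cgroup_newpage_charge() and split_huge_page() in the diff below.

/* Toy model of the split-on-charge-failure fallback; not kernel code. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define VM_FAULT_OOM 0x0001	/* mirrors the kernel flag of the same name */

/* Pretend the memcg can still take a 4 KiB charge but not a 2 MiB one. */
static bool charge_to_memcg(size_t bytes)
{
	return bytes <= 4096;
}

static int cow_fault(size_t page_bytes)
{
	if (charge_to_memcg(page_bytes)) {
		printf("charged %zu bytes, COW succeeds\n", page_bytes);
		return 0;
	}
	if (page_bytes > 4096) {
		/* The fallback this patch adds: split rather than oom kill. */
		printf("charge failed, splitting hugepage and retrying\n");
		return cow_fault(4096);
	}
	return VM_FAULT_OOM;
}

int main(void)
{
	int ret = cow_fault(2 * 1024 * 1024);	/* fault on a 2 MiB THP */

	printf("result: %s\n", (ret & VM_FAULT_OOM) ? "VM_FAULT_OOM" : "OK");
	return 0;
}

Running the sketch takes the split-and-retry path, which is the behavior the actual diff below adds to do_huge_pmd_wp_page().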
Diffstat (limited to 'mm/huge_memory.c')
-rw-r--r--  mm/huge_memory.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d7d7165156ca..edfeb8cb23df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -952,6 +952,8 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		count_vm_event(THP_FAULT_FALLBACK);
 		ret = do_huge_pmd_wp_page_fallback(mm, vma, address,
 						   pmd, orig_pmd, page, haddr);
+		if (ret & VM_FAULT_OOM)
+			split_huge_page(page);
 		put_page(page);
 		goto out;
 	}
@@ -959,6 +961,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
 		put_page(new_page);
+		split_huge_page(page);
 		put_page(page);
 		ret |= VM_FAULT_OOM;
 		goto out;