path: root/mm/slub.c
author	Roman Gushchin <guro@fb.com>	2021-02-24 12:03:11 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2021-02-24 13:38:29 -0800
commit	2e9bd483159939ed2c0704b914294653c8341d25 (patch)
tree	878ce98090f4ad550e476a5e80c746f62c44fda8 /mm/slub.c
parent	cad8320b4b395702e49578580c70026c8271ea88 (diff)
mm: memcg/slab: pre-allocate obj_cgroups for slab caches with SLAB_ACCOUNT
In general it's unknown in advance whether a slab page will contain accounted objects or not.  In order to avoid memory waste, an obj_cgroup vector is allocated dynamically when the need to account a new object arises.  Such an approach is memory efficient, but it requires an expensive cmpxchg() to set up the memcg/objcgs pointer, because an allocation can race with a different allocation on another cpu.

But in some common cases it's known for sure that a slab page will contain accounted objects: when the page belongs to a slab cache with the SLAB_ACCOUNT flag set.  This includes such popular objects as vm_area_struct, anon_vma, task_struct, etc.

In such cases we can pre-allocate the objcgs vector and simply assign it to the page without any atomic operations, because at this early stage the page is not visible to anyone else.

A very simplistic benchmark (allocating 10,000,000 64-byte objects in a row) shows a ~15% win.  In real life it seems that most workloads are not very sensitive to the speed of (accounted) slab allocations.

[guro@fb.com: open-code set_page_objcgs() and add some comments, by Johannes]
  Link: https://lkml.kernel.org/r/20201113001926.GA2934489@carbon.dhcp.thefacebook.com
[akpm@linux-foundation.org: fix it for mm-slub-call-account_slab_page-after-slab-page-initialization-fix.patch]

Link: https://lkml.kernel.org/r/20201110195753.530157-2-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
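The core of the change is the distinction between a freshly allocated slab page, whose memcg_data nobody else can see yet, and a live page that may race with another CPU.  A minimal sketch of that idea, assuming the helper names objs_per_slab_page() and the MEMCG_DATA_OBJCGS tag used by this kernel series (the actual code lives in mm/memcontrol.c and may differ in detail):

/*
 * Sketch: allocate the obj_cgroup vector for a slab page.  When the
 * page is brand new (new_page == true) it is not yet visible to any
 * other CPU, so the vector can be installed with a plain store.  For
 * an already-live page a cmpxchg() is needed, because a concurrent
 * allocation on another CPU may install its own vector first.
 */
static inline int memcg_alloc_page_obj_cgroups(struct page *page,
					       struct kmem_cache *s, gfp_t gfp,
					       bool new_page)
{
	unsigned int objects = objs_per_slab_page(s, page);
	unsigned long memcg_data;
	void *vec;

	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
			   page_to_nid(page));
	if (!vec)
		return -ENOMEM;

	memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
	if (new_page) {
		/* Nobody else can see the page yet: no atomics needed. */
		page->memcg_data = memcg_data;
	} else if (cmpxchg(&page->memcg_data, 0, memcg_data)) {
		/* Lost the race: reuse the vector the winner installed. */
		kfree(vec);
	}

	return 0;
}

For SLAB_ACCOUNT caches the new_page path is taken right at slab page allocation time, which is what the mm/slub.c change below enables by passing the allocation flags through.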
Diffstat (limited to 'mm/slub.c')
-rw-r--r--	mm/slub.c	2	
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index e0fef8f8d4bf..ca566361561e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1785,7 +1785,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	page->objects = oo_objects(oo);
 
-	account_slab_page(page, oo_order(oo), s);
+	account_slab_page(page, oo_order(oo), s, flags);
 
 	page->slab_cache = s;
 	__SetPageSlab(page);
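The mm/slub.c hunk only threads the slab page's gfp flags through to account_slab_page(); the substance of the patch lives in mm/slab.h.  A sketch of the accounting helper after this change, assuming the memcg_kmem_enabled() and cache_vmstat_idx() helpers of that era's mm/slab.h (the exact code in the tree may differ):

/*
 * Sketch of account_slab_page() after this patch: if the cache is
 * marked SLAB_ACCOUNT, every object in it will be accounted, so the
 * obj_cgroups vector can be allocated up front for the brand-new page
 * (new_page == true, plain store) instead of lazily via cmpxchg().
 */
static __always_inline void account_slab_page(struct page *page, int order,
					      struct kmem_cache *s,
					      gfp_t gfp)
{
	if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT))
		memcg_alloc_page_obj_cgroups(page, s, gfp, true);

	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
			    PAGE_SIZE << order);
}

Passing the caller's flags through means the vector is allocated under the same gfp constraints as the slab page itself, so the accounting path inherits whatever atomicity or reclaim restrictions the original allocation carried.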