author    Greg Thelen <gthelen@google.com>    2016-12-12 16:41:41 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>    2016-12-12 18:55:06 -0800
commit    f728b0a5d72ae99c446f933912914a61254c03b6 (patch)
tree      e35973494392c657faece3a8191adb5816e86d7f /mm/slab.h
parent    e70954fd6d4b469517fd906ef1c33310e90ef9f0 (diff)
mm, slab: faster active and free stats
Reading /proc/slabinfo or monitoring slabtop(1) can become very expensive
if there are many slab caches and if there are very lengthy per-node
partial and/or free lists.

Commit 07a63c41fa1f ("mm/slab: improve performance of gathering slabinfo
stats") addressed the per-node full lists, which showed a significant
improvement when no objects were freed.  This patch has the same
motivation and optimizes the remainder of the use cases, where there are
very lengthy partial and free lists.

This patch maintains per-node active_slabs (full and partial) and
free_slabs counters rather than iterating the lists at runtime when
reading /proc/slabinfo.

When allocating 100GB of slab from a test cache where every slab page is
on the partial list, reading /proc/slabinfo (which includes all other slab
caches on the system) takes ~247ms on average over 48 samples.  With this
patch, the same read takes ~0.856ms on average.

[rientjes@google.com: changelog]
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1611081505240.13403@chino.kir.corp.google.com
Signed-off-by: Greg Thelen <gthelen@google.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
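To illustrate the technique, here is a minimal userspace C sketch (not the
kernel implementation; the names node_stats, slab_became_free,
slab_became_active and print_slabinfo are illustrative only) of keeping
per-node counters in sync with list moves so that a stats read is O(1)
instead of a walk over the partial and free lists:

/*
 * Minimal userspace sketch of the counter-maintenance idea: whenever a
 * slab moves between the active (partial/full) lists and the free list,
 * adjust two counters; a /proc/slabinfo-style read then needs no list
 * iteration at all.
 */
#include <stdio.h>

struct node_stats {
	unsigned long active_slabs;	/* slabs on partial + full lists */
	unsigned long free_slabs;	/* slabs on the free list */
};

/* A fully-freed slab moves from an active list to the free list. */
static void slab_became_free(struct node_stats *n)
{
	n->active_slabs--;
	n->free_slabs++;
}

/* A free slab is reused: it moves back to the partial (active) list. */
static void slab_became_active(struct node_stats *n)
{
	n->free_slabs--;
	n->active_slabs++;
}

/* Stats read: constant time, regardless of list lengths. */
static void print_slabinfo(const struct node_stats *n)
{
	printf("num_slabs=%lu active=%lu free=%lu\n",
	       n->active_slabs + n->free_slabs,
	       n->active_slabs, n->free_slabs);
}

int main(void)
{
	struct node_stats n = { .active_slabs = 3, .free_slabs = 1 };

	slab_became_free(&n);	/* last object in some slab was freed */
	print_slabinfo(&n);	/* prints: num_slabs=4 active=2 free=2 */
	slab_became_active(&n);	/* a free slab is allocated from again */
	print_slabinfo(&n);	/* prints: num_slabs=4 active=3 free=1 */
	return 0;
}

The trade-off is a small constant cost on every list transition in
exchange for removing the O(number of slabs) walk from the read path.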
Diffstat (limited to 'mm/slab.h')
-rw-r--r--  mm/slab.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/slab.h b/mm/slab.h
index 699b072dc46e..26123c512fee 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -447,7 +447,8 @@ struct kmem_cache_node {
 	struct list_head slabs_partial;	/* partial list first, better asm code */
 	struct list_head slabs_full;
 	struct list_head slabs_free;
-	unsigned long num_slabs;
+	unsigned long active_slabs;	/* length of slabs_partial+slabs_full */
+	unsigned long free_slabs;	/* length of slabs_free */
 	unsigned long free_objects;
 	unsigned int free_limit;
 	unsigned int colour_next;	/* Per-node cache coloring */
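Since the patch splits the old num_slabs field into two counters, any
caller that still needs the per-node total would sum the pair. A
hypothetical helper (not part of this patch; the struct is trimmed to the
fields shown in the hunk above) might look like:

/*
 * Hypothetical helper, not in the patch: total slabs on a node after
 * num_slabs is split into active_slabs and free_slabs.
 */
struct kmem_cache_node {
	unsigned long active_slabs;	/* length of slabs_partial+slabs_full */
	unsigned long free_slabs;	/* length of slabs_free */
	/* ... remaining fields elided ... */
};

static inline unsigned long total_slabs(const struct kmem_cache_node *n)
{
	return n->active_slabs + n->free_slabs;
}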