author     Johannes Weiner <hannes@cmpxchg.org>            2017-07-06 15:40:43 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-07-06 16:24:35 -0700
commit     385386cff4c6f047907655e05791d88198c4c523
tree       d2dad3e3c9acdd389dd63a89c498fbedb284befa /mm/slab.c
parent     2b2695f5fdf185092a72c81ca2c09ed1b9b37416
mm: vmstat: move slab statistics from zone to node counters
Patch series "mm: per-lruvec slab stats"
Josef is working on a new approach to balancing slab caches and the page
cache. For this to work, he needs slab cache statistics on the lruvec
level. These patches implement that by adding infrastructure that
allows updating and reading generic VM stat items per lruvec, then
switching some existing VM accounting sites, including the slab
accounting ones, over to this new cgroup-aware API.
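For orientation, the cgroup-aware update the series converges on looks
roughly like the sketch below. mod_lruvec_page_state() is the helper
name the follow-up patches introduce; treat the body as an illustration
of the idea, not the final implementation:

/*
 * Sketch: a cgroup-aware stat update keeps the node-level counter in
 * sync and, with CONFIG_MEMCG, would additionally charge the same
 * delta to the page's lruvec (node + memcg) counter.
 */
static inline void mod_lruvec_page_state(struct page *page,
					 enum node_stat_item idx, int val)
{
	/* The node-level counter is always updated. */
	mod_node_page_state(page_pgdat(page), idx, val);
	/* The per-lruvec (memcg-aware) side is elided in this sketch. */
}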
I'll follow up with more patches on this, because there is actually
substantial simplification that can be done to the memory controller
when we replace private memcg accounting by making the existing VM
accounting sites cgroup-aware. But this is enough for Josef to base his
slab reclaim work on, so here goes.
This patch (of 5):
To re-implement slab cache vs. page cache balancing, we'll need the
slab counters at the lruvec level. Ever since LRU reclaim was moved
from the zone to the node, the lruvec is the intersection of the node,
not the zone, and the memcg.
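Concretely, an accounting site finds that lruvec through the page's
node and memcg. A minimal sketch using the existing
mem_cgroup_page_lruvec() helper (its use on slab pages here is purely
illustrative):

	struct pglist_data *pgdat = page_pgdat(page);
	/* The lruvec sits at the intersection of pgdat and the page's memcg. */
	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);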
We could retain the per-zone counters for when the page allocator dumps
its memory information on failures, and have counters on both levels,
which on all but NUMA node 0 is usually redundant. But let's keep it
simple for now and just move them. If anybody complains, we can restore
the per-zone counters.
[hannes@cmpxchg.org: fix oops]
Link: http://lkml.kernel.org/r/20170605183511.GA8915@cmpxchg.org
Link: http://lkml.kernel.org/r/20170530181724.27197-3-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/slab.c')
-rw-r--r--  mm/slab.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 503317188926..a38634ed478e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1425,10 +1425,10 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 
 	nr_pages = (1 << cachep->gfporder);
 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
-		add_zone_page_state(page_zone(page),
+		add_node_page_state(page_pgdat(page),
 			NR_SLAB_RECLAIMABLE, nr_pages);
 	else
-		add_zone_page_state(page_zone(page),
+		add_node_page_state(page_pgdat(page),
 			NR_SLAB_UNRECLAIMABLE, nr_pages);
 
 	__SetPageSlab(page);
@@ -1459,10 +1459,10 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
 	kmemcheck_free_shadow(page, order);
 
 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
-		sub_zone_page_state(page_zone(page),
+		sub_node_page_state(page_pgdat(page),
 			NR_SLAB_RECLAIMABLE, nr_freed);
 	else
-		sub_zone_page_state(page_zone(page),
+		sub_node_page_state(page_pgdat(page),
 			NR_SLAB_UNRECLAIMABLE, nr_freed);
 
 	BUG_ON(!PageSlab(page));
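Note that this view is limited to mm/slab.c; in the full commit the
counters themselves also move between the vmstat enums in
include/linux/mmzone.h, roughly as sketched below (elided context
marked with ...):

 enum zone_stat_item {
 	...
-	NR_SLAB_RECLAIMABLE,
-	NR_SLAB_UNRECLAIMABLE,
 	...
 };

 enum node_stat_item {
 	...
+	NR_SLAB_RECLAIMABLE,
+	NR_SLAB_UNRECLAIMABLE,
 	...
 };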