author      Chengming Zhou <chengming.zhou@linux.dev>    2024-06-27 15:59:58 +0800
committer   Andrew Morton <akpm@linux-foundation.org>    2024-07-12 15:52:12 -0700
commit      538148f9ba9e3136a877881e72ccbe56733daae2 (patch)
tree        fd398e57ddaae3be87c64393b83b7ca136733c9c /mm
parent      81510a0eaa6916c2fbb0b2639f3e617a296979a3 (diff)
mm/zsmalloc: clarify class per-fullness zspage counts
We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, and those helpers keep the per-fullness zspage counts
correct. But the special async free path uses a list "splice" instead of
remove_zspage(), so the per-fullness zspage count for ZS_INUSE_RATIO_0 is
never decremented. Clean this up by decrementing the count while iterating
over the zspage free list.
This doesn't actually fix anything: ZS_INUSE_RATIO_0 is just a
"placeholder" which is never used anywhere.
Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@linux.dev
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/zsmalloc.c   1
1 file changed, 1 insertion, 0 deletions
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fec1a39e5bbe..7fc25fa4e6b3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}
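
The accounting asymmetry the patch closes can be modeled outside the kernel.
The following is a minimal user-space sketch, not the kernel implementation:
the names (insert_zspage, class_stat_dec, ZS_INUSE_RATIO_0) mirror the kernel
identifiers for readability, but the list and stat structures are simplified
stand-ins. insert_zspage() bumps the per-fullness count; a bare splice moves
the pages aside without touching it; decrementing while walking the spliced
list restores the balance, mirroring what the patch does.

/*
 * Simplified user-space model of the per-fullness accounting in
 * mm/zsmalloc.c. Not kernel code: structures and helpers are stand-ins.
 */
#include <assert.h>
#include <stdio.h>

#define ZS_INUSE_RATIO_0  0   /* bucket for empty zspages awaiting free */
#define NR_GROUPS         1

struct zspage {
	struct zspage *next;
};

struct size_class {
	struct zspage *fullness_list[NR_GROUPS];
	long stat[NR_GROUPS];   /* per-fullness zspage counts */
};

/* Stand-in for class_stat_dec(): drop the count for one fullness group. */
static void class_stat_dec(struct size_class *c, int grp, long n)
{
	c->stat[grp] -= n;
}

/* Stand-in for insert_zspage(): link the page AND bump the count. */
static void insert_zspage(struct size_class *c, struct zspage *z, int grp)
{
	z->next = c->fullness_list[grp];
	c->fullness_list[grp] = z;
	c->stat[grp]++;
}

/*
 * Stand-in for the async free path's splice: the whole ZS_INUSE_RATIO_0
 * list is moved aside, but c->stat[] is NOT touched -- this is the
 * asymmetry the commit message describes.
 */
static struct zspage *splice_free_list(struct size_class *c)
{
	struct zspage *head = c->fullness_list[ZS_INUSE_RATIO_0];

	c->fullness_list[ZS_INUSE_RATIO_0] = NULL;
	return head;
}

int main(void)
{
	struct size_class class = { 0 };
	struct zspage pages[3];

	for (int i = 0; i < 3; i++)
		insert_zspage(&class, &pages[i], ZS_INUSE_RATIO_0);

	for (struct zspage *z = splice_free_list(&class); z; z = z->next) {
		/*
		 * The patch's fix: decrement the per-fullness count while
		 * iterating, balancing the increment insert_zspage() did
		 * (remove_zspage() would normally have done this).
		 */
		class_stat_dec(&class, ZS_INUSE_RATIO_0, 1);
		/* __free_zspage(pool, class, zspage) would run here. */
	}

	assert(class.stat[ZS_INUSE_RATIO_0] == 0);
	printf("ZS_INUSE_RATIO_0 count after drain: %ld\n",
	       class.stat[ZS_INUSE_RATIO_0]);
	return 0;
}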