path: root/include/linux/zswap.h
author     Chengming Zhou <zhouchengming@bytedance.com>   2024-01-19 11:22:23 +0000
committer  Andrew Morton <akpm@linux-foundation.org>      2024-02-22 10:24:39 -0800
commit     44c7c734a5132fc02f5584c7207f1d0c483f3ccd (patch)
tree       b39c42fb5a5e642d098dc930816962a75a5f2eab /include/linux/zswap.h
parent     bb29fd7760ae39905127afd31fc83294625ff704 (diff)
mm/zswap: split zswap rb-tree
Each swapfile has one rb-tree to look up the mapping of swp_entry_t to
zswap_entry, protected by a single spinlock, which can cause heavy lock
contention if multiple tasks zswap_store/load concurrently.

Optimize this scalability problem by splitting the zswap rb-tree into
multiple rb-trees, each corresponding to SWAP_ADDRESS_SPACE_PAGES (64M)
of swap space, just like we did in the swap cache address_space
splitting.

Although this method can't eliminate the spinlock contention completely,
it mitigates much of it. Below are the results of a kernel build in
tmpfs with the zswap shrinker enabled:

          linux-next   zswap-lock-optimize
real      1m9.181s     1m3.820s
user      17m44.036s   17m40.100s
sys       7m37.297s    4m54.622s

So there are clear improvements.

Link: https://lkml.kernel.org/r/20240117-b4-zswap-lock-optimize-v2-2-b5cc55479090@bytedance.com
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Chris Li <chriscli@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
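For reference, here is a minimal sketch of the data-structure split described
above. It assumes the kernel's existing SWAP_ADDRESS_SPACE_SHIFT /
SWAP_ADDRESS_SPACE_PAGES, MAX_SWAPFILES, swp_type() and swp_offset() helpers;
the zswap_tree layout and the swap_zswap_tree() helper name are illustrative,
not a verbatim copy of mm/zswap.c:

/* Illustrative sketch: one rb-tree (with its own lock) per 64M chunk. */
struct zswap_tree {
	struct rb_root rbroot;
	spinlock_t lock;
};

static struct zswap_tree *zswap_trees[MAX_SWAPFILES];

/* Pick the tree covering this entry's SWAP_ADDRESS_SPACE_PAGES chunk. */
static struct zswap_tree *swap_zswap_tree(swp_entry_t swp)
{
	return &zswap_trees[swp_type(swp)][swp_offset(swp) >>
					   SWAP_ADDRESS_SPACE_SHIFT];
}

Entries in different 64M chunks then map to different trees and different
spinlocks, which is where the contention reduction comes from.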
Diffstat (limited to 'include/linux/zswap.h')
-rw-r--r--   include/linux/zswap.h   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index eca388229d9a..91895ce1fdbc 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -30,7 +30,7 @@ struct zswap_lruvec_state {
 bool zswap_store(struct folio *folio);
 bool zswap_load(struct folio *folio);
 void zswap_invalidate(int type, pgoff_t offset);
-int zswap_swapon(int type);
+int zswap_swapon(int type, unsigned long nr_pages);
 void zswap_swapoff(int type);
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
@@ -51,7 +51,7 @@ static inline bool zswap_load(struct folio *folio)
 }
 
 static inline void zswap_invalidate(int type, pgoff_t offset) {}
-static inline int zswap_swapon(int type)
+static inline int zswap_swapon(int type, unsigned long nr_pages)
 {
 	return 0;
 }
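With the extra nr_pages argument, zswap_swapon() can size the per-swapfile
tree array at swapon time. A hedged sketch of what that could look like,
reusing the illustrative zswap_tree/zswap_trees names from the sketch above
(not a verbatim copy of mm/zswap.c):

int zswap_swapon(int type, unsigned long nr_pages)
{
	struct zswap_tree *trees, *tree;
	unsigned int nr, i;

	/* one rb-tree per SWAP_ADDRESS_SPACE_PAGES (64M) of swap slots */
	nr = DIV_ROUND_UP(nr_pages, SWAP_ADDRESS_SPACE_PAGES);
	trees = kvcalloc(nr, sizeof(*tree), GFP_KERNEL);
	if (!trees)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		tree = trees + i;
		tree->rbroot = RB_ROOT;
		spin_lock_init(&tree->lock);
	}

	zswap_trees[type] = trees;
	return 0;
}

The swapon path already knows the swapfile size in pages, so it can pass it
straight through when enabling swap on the device.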