author    Kairui Song <kasong@tencent.com>  2024-12-18 19:46:30 +0800
committer Andrew Morton <akpm@linux-foundation.org>  2025-01-25 20:22:19 -0800
commit    a53f311349ca967b481f73501df5dfabb47a1d2f (patch)
tree      7981fc9a6b0677183473da8a89e0bfc5f6afa61e /mm
parent    b02fcc082a4ae1675a7f70a017ca4926286271ad (diff)
mm, memcontrol: avoid duplicated memcg enable check
Patch series "mm/swap_cgroup: remove global swap cgroup lock", v3.

This series removes the global swap cgroup lock. The critical section of this lock is very short, but it is still a bottleneck for massively parallel swap workloads: up to a 10% performance gain for a tmpfs kernel-build test on a 48c96t system under memory pressure, and no regression in other cases.

This patch (of 3):

mem_cgroup_uncharge_swap() already includes a mem_cgroup_disabled() check, so the caller doesn't need to repeat it.

Link: https://lkml.kernel.org/r/20241218114633.85196-1-ryncsn@gmail.com
Link: https://lkml.kernel.org/r/20241218114633.85196-2-ryncsn@gmail.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Barry Song <baohua@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7ddbb2d12eb9..5c373d275e7a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4595,7 +4595,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
* correspond 1:1 to page and swap slot lifetimes: we charge the
* page to memory here, and uncharge swap when the slot is freed.
*/
- if (!mem_cgroup_disabled() && do_memsw_account()) {
+ if (do_memsw_account()) {
/*
* The swap entry might not get freed for a long time,
* let's not wait for it. The page already received a