author:    Boqun Feng <boqun.feng@gmail.com>       2025-03-26 11:08:30 -0700
committer: Ingo Molnar <mingo@kernel.org>          2025-03-27 08:23:17 +0100
commit:    495f53d5cca0f939eaed9dca90b67e7e6fb0e30c (patch)
tree:      bb5e9244eb8ba7afadae74043ce06d1c719e18f7
parent:    61c39d8c83e2077f33e0a2c8980a76a7f323f0ce (diff)
locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class()
Currently, when a lock class is allocated, nr_unused_locks is increased by 1
and stays that way until the class gets used, at which point nr_unused_locks
is decreased by 1 in mark_lock(). However, one scenario is missed: a lock
class may be zapped without ever being used. This can result in a situation
where nr_unused_locks != 0 although no unused lock class is active in the
system, and `cat /proc/lockdep_stats` will then trigger a WARN_ON() in a
CONFIG_DEBUG_LOCKDEP=y kernel:
[...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
[...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
As a result, lockdep will be disabled after this.
Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250326180831.510348-1-boqun.feng@gmail.com
-rw-r--r--  kernel/locking/lockdep.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b15757e63626..58d78a33ac65 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
 	hlist_del_rcu(&class->hash_entry);
 	WRITE_ONCE(class->key, NULL);
 	WRITE_ONCE(class->name, NULL);
+	/* Class allocated but not used, -1 in nr_unused_locks */
+	if (class->usage_mask == 0)
+		debug_atomic_dec(nr_unused_locks);
 	nr_lock_classes--;
 	__clear_bit(class - lock_classes, lock_classes_in_use);
 	if (class - lock_classes == max_lock_class_idx)
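
The accounting that this patch corrects can be illustrated with a minimal,
self-contained sketch. This is not kernel code: the structure, function names
and the plain integer counter below are illustrative assumptions standing in
for lockdep's lock_class, mark_lock()/zap_class() and the debug_atomic
counters; only the accounting rule matches the change above.

/* Toy model of nr_unused_locks accounting (userspace C, not lockdep itself). */
#include <assert.h>
#include <stdio.h>

struct toy_class {
	unsigned int usage_mask;	/* 0 until the class is first used */
};

static int nr_unused_locks;		/* stands in for the debug_atomic counter */

static void register_class(struct toy_class *c)
{
	c->usage_mask = 0;
	nr_unused_locks++;		/* allocated but not yet used */
}

static void mark_used(struct toy_class *c)
{
	if (c->usage_mask == 0)
		nr_unused_locks--;	/* first use: no longer counted as unused */
	c->usage_mask |= 1;
}

static void zap(struct toy_class *c)
{
	/* The fix: a class zapped before any use must also drop the counter,
	 * otherwise nr_unused_locks leaks and the lockdep_stats check trips. */
	if (c->usage_mask == 0)
		nr_unused_locks--;
}

int main(void)
{
	struct toy_class a, b;

	register_class(&a);
	register_class(&b);
	mark_used(&a);			/* a becomes used */
	zap(&b);			/* b is zapped without ever being used */

	/* Without the decrement in zap(), this would report 1 even though no
	 * unused class remains, mirroring the DEBUG_LOCKS_WARN_ON() above. */
	printf("nr_unused_locks = %d\n", nr_unused_locks);
	assert(nr_unused_locks == 0);
	return 0;
}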