author     Waiman Long <longman@redhat.com>  2019-05-24 15:42:22 -0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2019-05-24 14:17:18 -0700
commit     51816e9e113934281b44f1a352852ef7631e75ea
tree       2f2d6cc2f33a22e3a0603d844396bf3f5d5b6592
parent     0a72ef89901409847036664c23ba6eee7cf08e0e
locking/lock_events: Use this_cpu_add() when necessary
The kernel test robot has reported that the use of __this_cpu_add()
causes bug messages like:
BUG: using __this_cpu_add() in preemptible [00000000] code: ...
Given the imprecise nature of the counts and the possibility of resetting
them and repeating the measurement, using the unprotected __this_cpu_*()
functions is not really a problem.
To make the preemption-checking code happy, the this_cpu_*() functions
are used instead when CONFIG_DEBUG_PREEMPT is defined.
The imprecise nature of the lock event counts is also documented, with
the suggestion that the measurement be run a few times with the counts
reset in between to get a better picture of what is going on under
the hood.
Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting")
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>