author    Nikolay Borisov <nborisov@suse.com>  2019-05-31 13:06:51 +0300
committer Ingo Molnar <mingo@kernel.org>       2019-06-17 12:09:24 +0200
commit    9ffbe8ac05dbb4ab4a4836a55a47fc6be945a38f (patch)
tree      b75a7823242b8595bec5059167168719643ac3b0 /arch/x86/events
parent    ba54f0c3f7c400a392c413d8ca21d3ada35f2584 (diff)
locking/lockdep: Rename lockdep_assert_held_exclusive() -> lockdep_assert_held_write()
All callers of lockdep_assert_held_exclusive() use it to verify the correct
locking state of either a semaphore (ldisc_sem in tty, mmap_sem for perf
events, i_rwsem of inode for dax) or an rwlock (in apparmor). Thus it makes
sense to rename _exclusive to _write, since those are the semantics callers
care about. Additionally, there is already lockdep_assert_held_read(), which
the new naming is more consistent with.

No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190531100651.3969-1-nborisov@suse.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
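For reference, a minimal sketch (not part of this patch) of how the renamed
write-side assertion pairs with the existing lockdep_assert_held_read(); the
rwsem and function names here are hypothetical:

#include <linux/lockdep.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(example_sem);	/* hypothetical example rwsem */

/* Caller must hold example_sem for write (exclusive). */
static void example_writer_path(void)
{
	lockdep_assert_held_write(&example_sem);
	/* ... modify state protected by example_sem ... */
}

/* Caller must hold example_sem for read (shared). */
static void example_reader_path(void)
{
	lockdep_assert_held_read(&example_sem);
	/* ... read state protected by example_sem ... */
}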
Diffstat (limited to 'arch/x86/events')
-rw-r--r--  arch/x86/events/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f315425d8468..cf91d80b8452 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2179,7 +2179,7 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 	 * For now, this can't happen because all callers hold mmap_sem
 	 * for write. If this changes, we'll need a different solution.
 	 */
-	lockdep_assert_held_exclusive(&mm->mmap_sem);
+	lockdep_assert_held_write(&mm->mmap_sem);
 
 	if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1)
 		on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);