author	Peter Zijlstra <peterz@infradead.org>	2016-01-26 15:25:15 +0100
committer	Ingo Molnar <mingo@kernel.org>	2016-01-29 08:35:36 +0100
commit	5fa7c8ec57f70a7b5c6fe269fa9c51b9e465989c (patch)
tree	b98a192b101a0813beb4f306925d887f30857842 /kernel
parent	c6e5b73242d2d9172ea880483bc4ba7ffca0cfb2 (diff)
perf: Remove/simplify lockdep annotation
Now that the perf_event_ctx_lock_nested() call has moved from
put_event() into perf_event_release_kernel() the first reason is no
longer valid as that can no longer happen.

The second reason seems to have been invalidated when Al Viro made
fput() unconditionally async in the following commit:

  4a9d4b024a31 ("switch fput to task_work_add")

such that munmap()->fput()->release()->perf_release() would no longer
happen.

Therefore, remove the annotation. This should increase the efficiency
of lockdep coverage of perf locking.

Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
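[Editor's note] For readers unfamiliar with the annotation being removed, the sketch below shows the general lockdep pattern that a nested lock call such as perf_event_ctx_lock_nested() relies on: when a second lock of the same class is taken while one instance is already held, lockdep reports possible recursive locking unless the inner acquisition is annotated with a nesting subclass like SINGLE_DEPTH_NESTING. This is illustrative only, not the perf code; demo_ctx and demo_lock_parent_and_child() are hypothetical stand-ins for perf_event_context and its ctx->mutex.

/*
 * Minimal sketch of the lockdep nesting annotation discussed above.
 * NOT the perf code: the struct, field layout and function name are
 * made up.  Both mutexes are assumed to have been mutex_init()'ed by
 * whoever created the contexts.
 */
#include <linux/mutex.h>
#include <linux/lockdep.h>

struct demo_ctx {
	struct mutex	 mutex;		/* every instance shares one lock class */
	struct demo_ctx	*parent;
};

static void demo_lock_parent_and_child(struct demo_ctx *child)
{
	mutex_lock(&child->mutex);

	/*
	 * Same lock class as child->mutex, but a different instance.
	 * Without the nesting annotation lockdep would report
	 * "possible recursive locking detected" here.
	 */
	mutex_lock_nested(&child->parent->mutex, SINGLE_DEPTH_NESTING);

	/* ... work under both mutexes ... */

	mutex_unlock(&child->parent->mutex);
	mutex_unlock(&child->mutex);
}

The cost of such an annotation is that lockdep tracks the inner acquisition under a separate subclass, which hides real ordering problems involving that lock; dropping it, as this patch does, restores full checking on ctx->mutex.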
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/events/core.c	22
1 file changed, 1 insertion(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 98c862aff8fa..f1e53e8d4ae2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3758,19 +3758,7 @@ int perf_event_release_kernel(struct perf_event *event)
 	if (!is_kernel_event(event))
 		perf_remove_from_owner(event);
 
-	/*
-	 * There are two ways this annotation is useful:
-	 *
-	 *  1) there is a lock recursion from perf_event_exit_task
-	 *     see the comment there.
-	 *
-	 *  2) there is a lock-inversion with mmap_sem through
-	 *     perf_read_group(), which takes faults while
-	 *     holding ctx->mutex, however this is called after
-	 *     the last filedesc died, so there is no possibility
-	 *     to trigger the AB-BA case.
-	 */
-	ctx = perf_event_ctx_lock_nested(event, SINGLE_DEPTH_NESTING);
+	ctx = perf_event_ctx_lock(event);
 	WARN_ON_ONCE(ctx->parent_ctx);
 	perf_remove_from_context(event, DETACH_GROUP | DETACH_STATE);
 	perf_event_ctx_unlock(event, ctx);
@@ -8759,14 +8747,6 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 * perf_event_create_kernel_count() which does find_get_context()
 	 * without ctx::mutex (it cannot because of the move_group double mutex
 	 * lock thing). See the comments in perf_install_in_context().
-	 *
-	 * We can recurse on the same lock type through:
-	 *
-	 *   perf_event_exit_event()
-	 *     put_event()
-	 *       mutex_lock(&ctx->mutex)
-	 *
-	 * But since its the parent context it won't be the same instance.
 	 */
 	mutex_lock(&child_ctx->mutex);
 
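[Editor's note] The comment deleted in the first hunk also mentions a potential AB-BA inversion between ctx->mutex and mmap_sem via page faults in perf_read_group(). As a generic illustration of the kind of ordering problem lockdep reports once every acquisition is visible to it, here is a hypothetical sketch; lock_a and lock_b and the two paths are made up, and mmap_sem is really an rwsem rather than a mutex.

/*
 * Hypothetical AB-BA sketch: lock_a stands in for ctx->mutex and
 * lock_b for mmap_sem.  If both paths can run, lockdep flags the
 * second ordering as a "possible circular locking dependency".
 */
#include <linux/mutex.h>

static DEFINE_MUTEX(lock_a);
static DEFINE_MUTEX(lock_b);

static void path_ab(void)
{
	mutex_lock(&lock_a);
	mutex_lock(&lock_b);		/* establishes the order A -> B */
	mutex_unlock(&lock_b);
	mutex_unlock(&lock_a);
}

static void path_ba(void)
{
	mutex_lock(&lock_b);
	mutex_lock(&lock_a);		/* B -> A: the inversion lockdep reports */
	mutex_unlock(&lock_a);
	mutex_unlock(&lock_b);
}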