author		Peter Zijlstra <peterz@infradead.org>	2011-06-07 00:23:28 +0200
committer	Ingo Molnar <mingo@elte.hu>	2011-06-07 13:02:41 +0200
commit		b58f6b0dd3d677338b9065388cc2cc942b86338e (patch)
tree		1c10cd87480b3c05b100fb4d85afaecfe2dd5b1b /kernel/events/core.c
parent		3ce2a0bc9dfb6423491afe0afc9f099e24b8cba4 (diff)
perf, core: Fix initial task_ctx/event installation
A lost Quilt refresh of 2c29ef0fef8 (perf: Simplify and fix
__perf_install_in_context()) is causing grief and lockups,
reported by Jiri Olsa.
When installing an event in a task context, there are a number of
issues:
- there might not be an existing task context, in which case
we should install the now-current context;
- there might already be a context, not the current one, in
which case we should de-schedule the old and install the new;
These cases were dealt with in the lost refresh; however, one
further case was found in testing:
- there might already be a context, the current one, in which
case we should still de-schedule, and should take care
to re-install it (note that task_ctx_sched_out() clears
cpuctx->task_ctx); see the sketch after this list.
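For illustration, here is a toy, self-contained C model of how these three
cases map onto the fixed control flow. The struct names, fields and helpers
(toy_sched_out(), toy_install()) are hypothetical stand-ins, not the real
kernel data structures, and the real function also takes the ctx locks and
disables the PMU, which this sketch elides.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures. */
struct toy_task_ctx { const char *name; };
struct toy_cpu_ctx  { struct toy_task_ctx *task_ctx; };

/* Models task_ctx_sched_out(): de-schedule and clear cpuctx->task_ctx. */
static void toy_sched_out(struct toy_cpu_ctx *cpu)
{
	printf("sched out %s\n", cpu->task_ctx->name);
	cpu->task_ctx = NULL;		/* the real helper also clears this */
}

/* Models the fixed installation path for a target task context. */
static void toy_install(struct toy_cpu_ctx *cpu, struct toy_task_ctx *target)
{
	struct toy_task_ctx *task_ctx = cpu->task_ctx;

	if (task_ctx)			/* cases 2 and 3: something is active */
		toy_sched_out(cpu);

	if (task_ctx != target)		/* cases 1 and 2: active ctx is not the target */
		task_ctx = target;

	if (task_ctx)			/* all cases: (re-)install the target */
		cpu->task_ctx = task_ctx;

	printf("installed %s\n", cpu->task_ctx->name);
}

int main(void)
{
	struct toy_task_ctx a = { "ctx A" }, b = { "ctx B" };

	struct toy_cpu_ctx no_ctx    = { NULL };	/* case 1: no active task context */
	struct toy_cpu_ctx other_ctx = { &a };		/* case 2: a different ctx is active */
	struct toy_cpu_ctx same_ctx  = { &b };		/* case 3: the target itself is active */

	toy_install(&no_ctx, &b);
	toy_install(&other_ctx, &b);
	toy_install(&same_ctx, &b);
	return 0;
}

The third case is the one this patch adds: even when the target context was
already the active one, it has to be re-installed into cpuctx->task_ctx,
because the de-schedule cleared that pointer.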
Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1307399008.2497.971.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/events/core.c')
-rw-r--r--	kernel/events/core.c	28
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index ba89f40abe6a..5e8c7b1389bc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1505,25 +1505,31 @@ static int __perf_install_in_context(void *info)
 	struct perf_event_context *task_ctx = cpuctx->task_ctx;
 	struct task_struct *task = current;
 
-	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+	perf_ctx_lock(cpuctx, task_ctx);
 	perf_pmu_disable(cpuctx->ctx.pmu);
 
 	/*
 	 * If there was an active task_ctx schedule it out.
 	 */
-	if (task_ctx) {
+	if (task_ctx)
 		task_ctx_sched_out(task_ctx);
-		/*
-		 * If the context we're installing events in is not the
-		 * active task_ctx, flip them.
-		 */
-		if (ctx->task && task_ctx != ctx) {
-			raw_spin_unlock(&cpuctx->ctx.lock);
-			raw_spin_lock(&ctx->lock);
-			cpuctx->task_ctx = task_ctx = ctx;
-		}
+
+	/*
+	 * If the context we're installing events in is not the
+	 * active task_ctx, flip them.
+	 */
+	if (ctx->task && task_ctx != ctx) {
+		if (task_ctx)
+			raw_spin_unlock(&task_ctx->lock);
+		raw_spin_lock(&ctx->lock);
+		task_ctx = ctx;
+	}
+
+	if (task_ctx) {
+		cpuctx->task_ctx = task_ctx;
 		task = task_ctx->task;
 	}
+
 	cpu_ctx_sched_out(cpuctx, EVENT_ALL);
 
 	update_context_time(ctx);