author     eranian@google.com <eranian@google.com>    2010-03-10 22:26:05 -0800
committer  Ingo Molnar <mingo@elte.hu>                2010-03-11 15:23:28 +0100
commit     9b33fa6ba0e2f90fdf407501db801c2511121564 (patch)
tree       f5314e450afae1f1d4a9feadb0693e20584f62a6 /kernel/perf_event.c
parent     caa0142d84ceb0fc83e28f0475d0a7316cb6df77 (diff)
perf_events: Improve task_sched_in()
This patch is an optimization in perf_event_task_sched_in() to avoid
scheduling the events twice in a row.
Without it, the perf_disable()/perf_enable() pair is invoked twice:
pinned events start counting while flexible events are still being
scheduled, and we go through hw_perf_enable() twice.
By encapsulating the whole sequence in a single perf_disable()/perf_enable()
pair, we ensure that hw_perf_enable() is invoked only once, thanks to the
refcount protection.
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268288765-5326-1-git-send-email-eranian@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/perf_event.c')
-rw-r--r--  kernel/perf_event.c  |  4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 52c69a34d697..3853d49c7d56 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1368,6 +1368,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	if (cpuctx->task_ctx == ctx)
 		return;
 
+	perf_disable();
+
 	/*
 	 * We want to keep the following priority order:
 	 * cpu pinned (that don't need to move), task pinned,
@@ -1380,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
 
 	cpuctx->task_ctx = ctx;
+
+	perf_enable();
 }
 
 #define MAX_INTERRUPTS (~0ULL)