| author | Kan Liang <kan.liang@linux.intel.com> | 2020-08-21 12:57:52 -0700 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2020-09-10 11:19:34 +0200 |
| commit | 556cccad389717d6eb4f5a24b45ff41cad3aaabf | |
| tree | e6d7df64d411224602e0925c3bed2ecefcd58d8b | |
| parent | 35d1ce6bec133679ff16325d335217f108b84871 | |
perf/core: Pull pmu::sched_task() into perf_event_context_sched_in()
The pmu::sched_task() is a context switch callback. It passes
cpuctx->task_ctx as a parameter to the lower-level code. To find
cpuctx->task_ctx, the current code iterates a cpuctx list.
The same context was just iterated in perf_event_context_sched_in(),
which is invoked right before pmu::sched_task().
Reusing the cpuctx->task_ctx from perf_event_context_sched_in() avoids
the unnecessary iteration of the cpuctx list.
Both pmu::sched_task() and perf_event_context_sched_in() have to disable
the PMU. Pulling pmu::sched_task() into perf_event_context_sched_in()
also saves the overhead of an extra PMU disable and re-enable.
The new and old tasks may have equivalent contexts. The current code
optimizes this case by swapping the contexts, which avoids rescheduling
the events. In this case, pmu::sched_task() is still required, e.g., to
restore the LBR content.
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200821195754.20159-1-kan.liang@linux.intel.com