author     Linus Torvalds <torvalds@linux-foundation.org>  2017-03-17 13:59:52 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-03-17 13:59:52 -0700
commit     a7fc726bb2e3ab3e960064392aa7cb66999d6927 (patch)
tree       070f0b32f2dc5d8a06d48f1a15c161d4c97c0295 /arch
parent     cd21debe5318842a0bbd38c0327cfde2a3b90d65 (diff)
parent     a01851faab4b485e94c2ceaa1d0208a8d16ce367 (diff)
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Thomas Gleixner:
"A set of perf related fixes:
- fix a CR4.PCE propagation issue caused by using mm instead of
active_mm, which propagated the wrong value.
- perf core fixes, which plug a use-after-free issue and make the
event inheritance on fork more robust.
- a tooling fix for symbol handling"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf symbols: Fix symbols__fixup_end heuristic for corner cases
x86/perf: Clarify why x86_pmu_event_mapped() isn't racy
x86/perf: Fix CR4.PCE propagation to use active_mm instead of mm
perf/core: Better explain the inherit magic
perf/core: Simplify perf_event_free_task()
perf/core: Fix event inheritance on fork()
perf/core: Fix use-after-free in perf_release()
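
For context on the CR4.PCE fix: CR4.PCE gates the user-mode RDPMC instruction, which perf exposes for low-overhead self-monitoring. The sketch below is a minimal, hypothetical demo of that path, based on the mmap interface documented in perf_event_open(2); it omits the seqlock retry loop and pc->offset handling of the full read protocol and is not part of this commit. Mapping the event page is what calls into x86_pmu_event_mapped(); if CR4.PCE were left clear on some CPU, the rdpmc below would fault there.

/* Minimal self-monitoring sketch (x86, Linux; compile with gcc).
 * Simplified from the perf_event_open(2) man page: the real read
 * protocol retries under pc->lock and adds pc->offset. */
#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_INSTRUCTIONS,
		.exclude_kernel = 1,
	};
	int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	/* mmap of the event page is the path that reaches
	 * x86_pmu_event_mapped() and sets CR4.PCE for this mm. */
	struct perf_event_mmap_page *pc =
		mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED)
		return 1;

	if (pc->cap_user_rdpmc && pc->index) {
		/* RDPMC from user mode faults unless CR4.PCE is set
		 * on the CPU this thread is running on. */
		uint64_t count = __builtin_ia32_rdpmc(pc->index - 1);
		printf("raw counter: %llu\n", (unsigned long long)count);
	}
	return 0;
}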
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/events/core.c  16
1 file changed, 14 insertions, 2 deletions
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 349d4d17aa7f..2aa1ad194db2 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2101,8 +2101,8 @@ static int x86_pmu_event_init(struct perf_event *event)
 
 static void refresh_pce(void *ignored)
 {
-	if (current->mm)
-		load_mm_cr4(current->mm);
+	if (current->active_mm)
+		load_mm_cr4(current->active_mm);
 }
 
 static void x86_pmu_event_mapped(struct perf_event *event)
@@ -2110,6 +2110,18 @@ static void x86_pmu_event_mapped(struct perf_event *event)
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
+	/*
+	 * This function relies on not being called concurrently in two
+	 * tasks in the same mm. Otherwise one task could observe
+	 * perf_rdpmc_allowed > 1 and return all the way back to
+	 * userspace with CR4.PCE clear while another task is still
+	 * doing on_each_cpu_mask() to propagate CR4.PCE.
+	 *
+	 * For now, this can't happen because all callers hold mmap_sem
+	 * for write. If this changes, we'll need a different solution.
+	 */
+	lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+
 	if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
 		on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
 }
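
The added assertion documents a subtle constraint in the refcount pattern above: only the 0 -> 1 transition of perf_rdpmc_allowed triggers the cross-CPU broadcast, so two concurrent mappers in the same mm could let one return before the other's broadcast completes. A minimal user-space sketch of that pattern, with all names as illustrative stand-ins rather than kernel API:

/* Sketch of the "first mapper broadcasts" pattern used by
 * x86_pmu_event_mapped(); rdpmc_mappers and broadcast_enable()
 * are hypothetical stand-ins, not kernel identifiers. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int rdpmc_mappers;   /* plays the role of perf_rdpmc_allowed */

static void broadcast_enable(void) /* stand-in for on_each_cpu_mask(refresh_pce) */
{
	puts("IPI all CPUs in mm_cpumask: set CR4.PCE");
}

static void event_mapped(void)
{
	/* Only the 0 -> 1 transition needs the cross-CPU broadcast;
	 * later mappings just bump the count. This is why concurrent
	 * callers would race: a second caller can see a count >= 1 and
	 * return before the first caller's broadcast has finished,
	 * which is what the mmap_sem assertion rules out. */
	if (atomic_fetch_add(&rdpmc_mappers, 1) == 0)
		broadcast_enable();
}

int main(void)
{
	event_mapped();  /* first mapping: broadcasts */
	event_mapped();  /* second mapping: count only */
	return 0;
}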