author	Paul Mackerras <paulus@samba.org>	2009-10-14 16:58:03 +1100
committer	Greg Kroah-Hartman <gregkh@suse.de>	2009-12-08 10:22:26 -0800
commit	415cc7b7fe6fd663139da295d7bd2cde556345f0 (patch)
tree	a0e421fd7567913250df8ccfe63320ddbbed4b35 /kernel
parent	2a959cfd1e6eff5ce71693bb6f7e753d71f5f088 (diff)
perf_event: Adjust frequency and unthrottle for non-group-leader events
commit 03541f8b69c058162e4cf9675ec9181e6a204d55 upstream.

The loop in perf_ctx_adjust_freq checks the frequency of sampling
counters, adjusts the event interval and unthrottles the event if
required, and resets the interrupt count for the event. However, at
present it only looks at group leaders. This means that a sampling
event that is not a group leader will eventually get throttled, once
its interrupt count reaches sysctl_perf_event_sample_rate/HZ --- and
that is guaranteed to happen if the event is active for long enough,
since the interrupt count never gets reset. Once it is throttled it
never gets unthrottled, so it basically just stops working at that
point.

This fixes it by making perf_ctx_adjust_freq use ctx->event_list
rather than ctx->group_list. The existing spin_lock/spin_unlock
around the loop makes it unnecessary to put
rcu_read_lock/rcu_read_unlock around the list_for_each_entry_rcu().

Reported-by: Mark W. Krentel <krentel@cs.rice.edu>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <19157.26731.855609.165622@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
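For context, a minimal sketch of the two list layouts involved,
assuming the field names used by kernel/perf_counter.c in this kernel
series (counter_list/list_entry for group leaders, event_list/event_entry
for all counters); an illustration, not the verbatim kernel code:

/*
 * Simplified sketch. In this kernel series a perf_counter_context
 * keeps two lists:
 *
 *   ctx->counter_list -- group leaders only, linked via list_entry;
 *   ctx->event_list   -- every counter, linked via event_entry.
 *
 * Walking counter_list never visits sibling (non-leader) counters,
 * so their interrupt counts are never reset and, once throttled,
 * they stay throttled.
 */
struct perf_counter *counter;

/* old loop: visits group leaders only */
list_for_each_entry(counter, &ctx->counter_list, list_entry) {
	/* sibling counters are skipped here */
}

/* fixed loop: visits every counter, leaders and siblings alike */
list_for_each_entry_rcu(counter, &ctx->event_list, event_entry) {
	/* the interrupt-count reset and unthrottle now reach siblings */
}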
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/perf_counter.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index b1dc4684e66a..237fd07a369f 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -1363,7 +1363,7 @@ static void perf_ctx_adjust_freq(struct perf_counter_context *ctx)
 	u64 interrupts, freq;
 
 	spin_lock(&ctx->lock);
-	list_for_each_entry(counter, &ctx->counter_list, list_entry) {
+	list_for_each_entry_rcu(counter, &ctx->event_list, event_entry) {
 		if (counter->state != PERF_COUNTER_STATE_ACTIVE)
 			continue;
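For reference, a hedged sketch of the per-counter work the loop
performs on each tick, following the commit description (reset the
interrupt count, unthrottle a counter that hit the throttle limit);
the exact body in kernel/perf_counter.c may differ in detail:

/*
 * Sketch of the loop body, per the commit description; simplified,
 * not the verbatim kernel code.
 */
hwc = &counter->hw;

interrupts = hwc->interrupts;
hwc->interrupts = 0;	/* reset -- before this fix, never done for siblings */

if (interrupts == MAX_INTERRUPTS) {
	/* the counter was throttled in the interrupt path; undo that */
	counter->pmu->unthrottle(counter);
}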