author    | Yang Jihong <yangjihong1@huawei.com> | 2023-02-27 10:35:08 +0800
committer | Peter Zijlstra <peterz@infradead.org> | 2023-04-14 16:08:22 +0200
commit    | 15def34e2635ab7e0e96f1bc32e1b69609f14942 (patch)
tree      | 6f5eed1a6bf23dca68381cb036c35a57b208010d /kernel/events
parent    | 872d28001be56b205bd9b3f97cea1571a1bde317 (diff)
perf/core: Fix hardlockup failure caused by perf throttle
commit e050e3f0a71bf ("perf: Fix broken interrupt rate throttling")
introduced a change in the throttling threshold judgment. Before that
commit, hwc->interrupts was compared against max_samples_per_tick and
then incremented by 1; that commit reversed the order of these two
operations, changing the semantics of max_samples_per_tick.

In the literal sense of "max_samples_per_tick", an event should not be
throttled when hwc->interrupts == max_samples_per_tick; therefore, the
judgment condition should be changed to
"hwc->interrupts > max_samples_per_tick".
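
The ordering change can be illustrated with a minimal userspace sketch (not kernel code) that contrasts the pre-e050e3f0a71b ordering, the post-commit ordering with the original ">=" check, and the corrected ">" check. The struct hwc and the throttle_* helpers below are simplified stand-ins invented for illustration; they are not the real hw_perf_event state and omit details such as setting MAX_INTERRUPTS on throttle.

```c
/* Minimal userspace sketch of the throttle check orderings (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

struct hwc { unsigned int interrupts; };

/* Ordering before e050e3f0a71b: compare first, then increment. */
static bool throttle_pre(struct hwc *h, unsigned int max_samples_per_tick)
{
	if (h->interrupts >= max_samples_per_tick)
		return true;			/* throttle */
	h->interrupts++;
	return false;
}

/* After e050e3f0a71b: increment first, ">=" kept -- throttles one interrupt early. */
static bool throttle_buggy(struct hwc *h, unsigned int max_samples_per_tick)
{
	h->interrupts++;
	return h->interrupts >= max_samples_per_tick;
}

/* This patch: increment first, compare with ">" -- original semantics restored. */
static bool throttle_fixed(struct hwc *h, unsigned int max_samples_per_tick)
{
	h->interrupts++;
	return h->interrupts > max_samples_per_tick;
}

int main(void)
{
	struct hwc a = {0}, b = {0}, c = {0};
	unsigned int max = 1;	/* the problematic minimum value */
	int i;

	for (i = 1; i <= 2; i++)
		printf("interrupt %d: pre=%d buggy=%d fixed=%d\n", i,
		       throttle_pre(&a, max), throttle_buggy(&b, max),
		       throttle_fixed(&c, max));
	/*
	 * Output shows pre and fixed agree (first interrupt passes, second
	 * throttles), while buggy already throttles the first interrupt of
	 * the tick.
	 */
	return 0;
}
```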
In fact, this may cause hardlockup detection to fail: the minimum value
of max_samples_per_tick may be 1, and in that case
__perf_event_account_interrupt() returns 1 for the very first interrupt
of a tick.

As a result, the nmi_watchdog event gets throttled, which stops the PMU
(taking x86 as an example, see x86_pmu_handle_irq()).
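
For context on how max_samples_per_tick can reach 1: kernel/events/core.c derives it from the sample-rate sysctl roughly as DIV_ROUND_UP(sysctl_perf_event_sample_rate, HZ), and the kernel can auto-lower that rate when perf samples take too long. The HZ and rate values in the sketch below are assumptions chosen for illustration, not taken from this commit.

```c
/* Hedged sketch of how max_samples_per_tick can end up at 1 (illustrative only). */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int hz = 1000;				/* assumed CONFIG_HZ */
	unsigned int rates[] = { 100000, 1000, 250 };	/* assumed sample-rate sysctl values */
	unsigned int i;

	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
		unsigned int max_samples_per_tick = DIV_ROUND_UP(rates[i], hz);

		printf("sample_rate=%6u -> max_samples_per_tick=%u\n",
		       rates[i], max_samples_per_tick);
	}
	/*
	 * Once max_samples_per_tick == 1, the pre-fix ">=" check makes the
	 * first NMI-watchdog interrupt of a tick return 1 from
	 * __perf_event_account_interrupt(); on x86, x86_pmu_handle_irq()
	 * then stops the event, so hardlockup detection silently goes away.
	 */
	return 0;
}
```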
Fixes: e050e3f0a71b ("perf: Fix broken interrupt rate throttling")
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230227023508.102230-1-yangjihong1@huawei.com
Diffstat (limited to 'kernel/events')
-rw-r--r-- | kernel/events/core.c | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/kernel/events/core.c b/kernel/events/core.c
index fb3e436bcd4a..82b95b8e0409 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9433,8 +9433,8 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
 		hwc->interrupts = 1;
 	} else {
 		hwc->interrupts++;
-		if (unlikely(throttle
-			     && hwc->interrupts >= max_samples_per_tick)) {
+		if (unlikely(throttle &&
+			     hwc->interrupts > max_samples_per_tick)) {
 			__this_cpu_inc(perf_throttled_count);
 			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
 			hwc->interrupts = MAX_INTERRUPTS;