author     Alexandre Chartre <alexandre.chartre@oracle.com>   2021-07-19 13:39:01 +0100
committer  Marc Zyngier <maz@kernel.org>                      2021-08-02 14:26:34 +0100
commit     ca4f202d08ba7f24cc97dce14c6d20ec7a679135 (patch)
tree       7fa03570486c90c00152610607344c943b2b2e3c /arch
parent     f5eff40058a856c23c5ec2f31756f107a2b1ef84 (diff)
KVM: arm64: Disabling disabled PMU counters wastes a lot of time
In a KVM guest on arm64, performance counter interrupts have an unnecessary overhead which slows down execution when using the "perf record" command and limits the "perf record" sampling period.

The problem is that when a guest VM disables counters by clearing the PMCR_EL0.E bit (bit 0), KVM will disable all counters defined in PMCR_EL0 even if they are not enabled in PMCNTENSET_EL0.

KVM disables a counter by calling into the perf framework, in particular by calling perf_event_create_kernel_counter(), which is a time-consuming operation. So, for example, with a Neoverse N1 CPU core, which has 6 event counters and one cycle counter, KVM will always disable all 7 counters even if only one is enabled.

This typically happens when using the "perf record" command in a guest VM: perf will disable all event counters with PMCNTENSET_EL0 and only use the cycle counter. And when using the "perf record" -F option with a high profiling frequency, the overhead of KVM disabling all counters instead of one on every counter interrupt becomes very noticeable.

The problem is fixed by having KVM disable only the counters which are enabled in PMCNTENSET_EL0. If a counter is not enabled in PMCNTENSET_EL0, then KVM will not enable it when setting PMCR_EL0.E, and it will remain disabled as long as it is not enabled in PMCNTENSET_EL0. So there is effectively no need to disable a counter when clearing PMCR_EL0.E if it is not enabled in PMCNTENSET_EL0.

Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
[maz: moved 'mask' close to the actual user, simplifying the patch]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210712170345.660272-1-alexandre.chartre@oracle.com
Link: https://lore.kernel.org/r/20210719123902.1493805-4-maz@kernel.org
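The register interplay described above can be made concrete from the guest's side: PMCR_EL0.E is the global enable bit, while PMCNTENSET_EL0 arms individual counters, so a profiling session typically arms only a subset of them. A minimal guest-kernel sketch follows; the two helper functions are hypothetical illustrations, while read_sysreg(), write_sysreg(), BIT() and ARMV8_PMU_PMCR_E are existing kernel definitions:

#include <linux/bits.h>
#include <asm/perf_event.h>     /* ARMV8_PMU_PMCR_E */
#include <asm/sysreg.h>         /* read_sysreg()/write_sysreg() */

/* Bit 31 of PMCNTENSET_EL0 is the cycle counter (ARMV8_PMU_CYCLE_IDX). */
#define GUEST_CYCLE_COUNTER     BIT(31)

/* Hypothetical helper: arm only the cycle counter, then enable the PMU. */
static void guest_start_cycle_profiling(void)
{
        write_sysreg(GUEST_CYCLE_COUNTER, pmcntenset_el0);
        write_sysreg(read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E, pmcr_el0);
}

/*
 * Hypothetical helper: stop profiling by clearing PMCR_EL0.E. This write
 * traps to KVM; before the patch, KVM's handler went through the perf
 * framework for every counter (all 7 on a Neoverse N1) even though only
 * the cycle counter is set in PMCNTENSET_EL0.
 */
static void guest_stop_cycle_profiling(void)
{
        write_sysreg(read_sysreg(pmcr_el0) & ~ARMV8_PMU_PMCR_E, pmcr_el0);
}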
Diffstat (limited to 'arch')
-rw-r--r--   arch/arm64/kvm/pmu-emul.c   5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index fae4e95b586c..dc65b58dc68f 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -563,20 +563,21 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-        unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
         int i;
 
         if (val & ARMV8_PMU_PMCR_E) {
                 kvm_pmu_enable_counter_mask(vcpu,
                        __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
         } else {
-                kvm_pmu_disable_counter_mask(vcpu, mask);
+                kvm_pmu_disable_counter_mask(vcpu,
+                       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
         }
 
         if (val & ARMV8_PMU_PMCR_C)
                 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
 
         if (val & ARMV8_PMU_PMCR_P) {
+                unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
                 mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
                 for_each_set_bit(i, &mask, 32)
                         kvm_pmu_set_counter_value(vcpu, i, 0);
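For reference, here is a sketch of kvm_pmu_handle_pmcr() as it reads with the patch applied, reconstructed from the hunk above (the function's closing braces fall outside the quoted context and are assumed; the comments are added). With PMCR_EL0.E cleared, only the counters actually set in PMCNTENSET_EL0 are handed to kvm_pmu_disable_counter_mask(), and 'mask' now lives in the PMCR_EL0.P branch, its only remaining user:

void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
{
        int i;

        if (val & ARMV8_PMU_PMCR_E) {
                /* Global enable: arm only what the guest set in PMCNTENSET_EL0. */
                kvm_pmu_enable_counter_mask(vcpu,
                       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
        } else {
                /*
                 * Global disable: touching only the armed counters avoids
                 * the costly perf-framework round trip for the rest.
                 */
                kvm_pmu_disable_counter_mask(vcpu,
                       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
        }

        if (val & ARMV8_PMU_PMCR_C)
                kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);

        if (val & ARMV8_PMU_PMCR_P) {
                unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
                mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
                for_each_set_bit(i, &mask, 32)
                        kvm_pmu_set_counter_value(vcpu, i, 0);
        }
}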