author     Peter Zijlstra <a.p.zijlstra@chello.nl>    2011-11-20 20:36:02 +0100
committer  Ingo Molnar <mingo@elte.hu>                2011-12-21 11:01:07 +0100
commit     35edc2a5095efb189e60dc32bbb9d2663aec6d24 (patch)
tree       3296a0dc54c4eb9d9ae5e0715d7521ecbb6d6f7e /arch/hexagon
parent     9a0f05cb36888550d1509d60aa55788615abea44 (diff)
perf, arch: Rework perf_event_index()
Put the logic to compute the event index into a per pmu method. This is
required because the x86 rules are weird and wonderful and don't match
the capabilities of the current scheme.

AFAIK only powerpc actually has a usable userspace read of the PMCs but
I'm not at all sure anybody actually used that.

ARM is restored to the default since it currently does not support
userspace access at all. And all software events are provided with a
method that reports their index as 0 (disabled).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Arun Sharma <asharma@fb.com>
Link: http://lkml.kernel.org/n/tip-dfydxodki16lylkt3gl2j7cw@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
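Note: the hexagon hunk below only drops the now-unused PERF_EVENT_INDEX_OFFSET macro; the actual per-pmu method lands in the generic perf code, which this page does not show. The following is a minimal, self-contained sketch of the scheme the commit message describes, with stand-in struct definitions and illustrative function names (the real types and hooks live in include/linux/perf_event.h and kernel/events/core.c):

	#include <stdio.h>

	/* Illustrative stand-ins for the kernel types involved. */
	struct hw_perf_event { int idx; };
	struct perf_event    { struct hw_perf_event hw; };

	struct pmu {
		/* per-pmu hook replacing the old PERF_EVENT_INDEX_OFFSET scheme */
		int (*event_idx)(struct perf_event *event);
	};

	/* default: expose the hardware counter slot as a 1-based index */
	static int perf_event_idx_default(struct perf_event *event)
	{
		return event->hw.idx + 1;
	}

	/* software events have no user-readable counter: index 0 means disabled */
	static int perf_swevent_event_idx(struct perf_event *event)
	{
		(void)event;
		return 0;
	}

	int main(void)
	{
		struct perf_event ev = { .hw = { .idx = 2 } };
		struct pmu hw_pmu = { .event_idx = perf_event_idx_default };
		struct pmu sw_pmu = { .event_idx = perf_swevent_event_idx };

		printf("hw index: %d\n", hw_pmu.event_idx(&ev));  /* prints 3 */
		printf("sw index: %d\n", sw_pmu.event_idx(&ev));  /* prints 0 */
		return 0;
	}

The point of making the index a per-pmu callback is that each PMU can apply its own mapping (or refuse userspace access by returning 0) instead of every architecture sharing a single compile-time offset macro.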
Diffstat (limited to 'arch/hexagon')
-rw-r--r--  arch/hexagon/include/asm/perf_event.h  2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/hexagon/include/asm/perf_event.h b/arch/hexagon/include/asm/perf_event.h
index 6c2910f91180..8b8526b491c7 100644
--- a/arch/hexagon/include/asm/perf_event.h
+++ b/arch/hexagon/include/asm/perf_event.h
@@ -19,6 +19,4 @@
 #ifndef _ASM_PERF_EVENT_H
 #define _ASM_PERF_EVENT_H
 
-#define PERF_EVENT_INDEX_OFFSET 0
-
 #endif /* _ASM_PERF_EVENT_H */