Commit message (Author, Date, Files, Lines)
* perf probe: Do not add offset twice to uprobe address (Masami Hiramatsu, 2014-02-10, 1 file, -1/+1)

  Fix perf-probe so that it does not add the offset value twice to the uprobe
  probe address when post processing.

  The tevs[i].point.address struct member is already the address of
  symbol+offset, but the current perf-probe adjusts point.address by adding
  the offset again. As a result, the probe address becomes
  symbol+offset+offset. This may cause unexpected code corruption, so an
  urgent fix is needed.

  Without this fix:
  ---
  # ./perf probe -x ./perf dso__load_vmlinux+4
  # ./perf probe -l
    probe_perf:dso__load_vmlinux (on 0x000000000006d2b8)
  # nm ./perf.orig | grep dso__load_vmlinux\$
  000000000046d0a0 T dso__load_vmlinux
  ---
  You can see the given offset is 4 but the actual probed address is
  dso__load_vmlinux+8.

  With this fix:
  ---
  # ./perf probe -x ./perf dso__load_vmlinux+4
  # ./perf probe -l
    probe_perf:dso__load_vmlinux (on 0x000000000006d2b4)
  ---
  Now the problem is fixed.

  Note: this bug was introduced by commit
  fb7345bbf7fad9bf72ef63a19c707970b9685812.

  Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
  Cc: "David A. Long" <dave.long@linaro.org>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Namhyung Kim <namhyung@kernel.org>
  Cc: Oleg Nesterov <oleg@redhat.com>
  Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  Cc: Steven Rostedt <rostedt@goodmis.org>
  Cc: yrl.pp-manager.tt@hitachi.com
  Link: http://lkml.kernel.org/r/20140205051858.6519.27314.stgit@kbuild-fedora.yrl.intra.hitachi.co.jp
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf/x86: Fix Userspace RDPMC switch (Peter Zijlstra, 2014-02-09, 1 file, -1/+1)

  The current code forgets to change the CR4 state on the current CPU.
  Use on_each_cpu() instead of smp_call_function().

  Reported-by: Mark Davies <junk@eslaf.co.uk>
  Suggested-by: Mark Davies <junk@eslaf.co.uk>
  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Cc: fweisbec@gmail.com
  Link: http://lkml.kernel.org/n/tip-69efsat90ibhnd577zy3z9gh@git.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
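  The distinction matters because smp_call_function() runs the callback only
  on the other online CPUs, while on_each_cpu() also runs it on the calling
  CPU. A minimal kernel-context sketch of the pattern (the callback name is
  illustrative, not the actual function touched by the patch):

      #include <linux/smp.h>

      static void set_pce_bit(void *info)   /* hypothetical callback */
      {
              /* toggle CR4.PCE on whichever CPU this runs on */
      }

      static void update_rdpmc_all_cpus(void)
      {
              /*
               * Buggy: runs only on the *other* CPUs, so the CPU that
               * initiated the change keeps its stale CR4 state.
               */
              /* smp_call_function(set_pce_bit, NULL, 1); */

              /* Fixed: runs on every online CPU, including this one. */
              on_each_cpu(set_pce_bit, NULL, 1);
      }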
* perf/x86/intel/p6: Add userspace RDPMC quirk for PPro (Peter Zijlstra, 2014-02-09, 3 files, -16/+39)

  PPro machines can die hard when PCE gets enabled due to a CPU erratum.
  The safe way is to disable it by default and keep it disabled.

  See erratum 26 in:
  http://download.intel.com/design/archives/processors/pro/docs/24268935.pdf

  Reported-and-Tested-by: Mark Davies <junk@eslaf.co.uk>
  Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
  Cc: Stephane Eranian <eranian@google.com>
  Cc: Vince Weaver <vince@deater.net>
  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/r/20140206170815.GW2936@laptop.programming.kicks-ass.net
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
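  The usual shape of such a quirk is a family/model check at PMU init time
  that keeps the sysfs rdpmc attribute off. A sketch under the assumption
  that the x86 PMU structure exposes attr_rdpmc/attr_rdpmc_broken style
  fields (treat the exact field names as assumptions):

      static __init void p6_pmu_rdpmc_quirk(void)
      {
              if (boot_cpu_data.x86_model == 1) {
                      /*
                       * Original PPro: erratum 26 means CR4.PCE can hang
                       * the machine, so never enable userspace RDPMC here.
                       */
                      pr_warn("userspace RDPMC disabled due to PPro erratum\n");
                      x86_pmu.attr_rdpmc_broken = 1;
                      x86_pmu.attr_rdpmc = 0;
              }
      }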
* Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent (Ingo Molnar, 2014-02-02, 13 files, -62/+162)

  Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

  * Fix annotation for relocated kernel (Adrian Hunter)
  * Fix demangling of symbols in kernel and kernel modules (Avi Kivity)
  * Fix include for non x86 architectures (Francesco Fusco)
  * Fix ARM64 memory barriers (Peter Zijlstra)

  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * perf buildid-cache: Check relocation when checking for existing kcore (Adrian Hunter, 2014-01-31, 1 file, -4/+29)

  perf buildid-cache does not make another copy of kcore if the buildid and
  modules match an existing copy. That does not take into account the
  possibility that the kernel has been relocated. Extend the check to also
  verify that the reference relocation symbol matches; otherwise, do make a
  copy.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-10-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf tools: Adjust kallsyms for relocated kernel (Adrian Hunter, 2014-01-31, 1 file, -2/+38)

  If the kernel is relocated at boot time, kallsyms will not match data
  recorded previously. That does not matter for modules because they are
  corrected anyway. It also does not matter if vmlinux is being used for
  symbols. But if perf tools has only kallsyms then the symbols will not
  match. Fix by applying the delta gained by comparing the old and current
  addresses of the relocation reference symbol.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-9-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
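  A minimal standalone sketch of the idea (the names are illustrative, not
  the actual perf internals): the delta between the reference symbol's
  recorded address and its current kallsyms address is applied to every
  kallsyms symbol before lookup.

      #include <stdint.h>
      #include <stdio.h>

      struct sym {
              uint64_t addr;
              const char *name;
      };

      /*
       * ref_recorded: address of the reference symbol (e.g. _stext) at
       * record time, as stored in the perf.data file.
       * ref_current:  address of the same symbol in the current kallsyms.
       */
      static void adjust_kallsyms(struct sym *syms, size_t n,
                                  uint64_t ref_recorded, uint64_t ref_current)
      {
              int64_t delta = (int64_t)(ref_recorded - ref_current);

              for (size_t i = 0; i < n; i++)
                      syms[i].addr += delta;  /* shift back to recorded layout */
      }

      int main(void)
      {
              struct sym syms[] = { { 0xffffffff81200000ULL, "_stext" } };

              /* kernel was relocated by +0x200000 since the data was recorded */
              adjust_kallsyms(syms, 1, 0xffffffff81000000ULL, 0xffffffff81200000ULL);
              printf("%s -> %#llx\n", syms[0].name,
                     (unsigned long long)syms[0].addr);
              return 0;
      }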
| * perf tests: No need to set up ref_reloc_sym (Adrian Hunter, 2014-01-31, 1 file, -10/+0)

  Now that ref_reloc_sym is set up by machine__create_kernel_maps(), the
  "vmlinux symtab matches kallsyms" test no longer has to do it itself.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-8-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf symbols: Prevent the use of kcore if the kernel has moved (Adrian Hunter, 2014-01-31, 1 file, -4/+21)

  Use of kcore is predicated upon it matching the recorded data. If the
  kernel has been relocated at boot time (i.e. since the data was recorded)
  then do not use kcore.

  Note that it is possible to make a copy of kcore at the time the data is
  recorded using 'perf buildid-cache'. Then the perf tools will use the copy
  because it does match the data.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-7-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf record: Get ref_reloc_sym from kernel map (Adrian Hunter, 2014-01-31, 3 files, -30/+9)

  Now that ref_reloc_sym is set up when the kernel map is created,
  'perf record' does not need to pass the symbol names to
  perf_event__synthesize_kernel_mmap(), which can read the values needed
  from ref_reloc_sym directly.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-6-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf machine: Set up ref_reloc_sym in machine__create_kernel_maps() (Adrian Hunter, 2014-01-31, 2 files, -0/+25)

  The ref_reloc_sym is always needed for the kernel map in order to check
  for relocation. Consequently set it up when the kernel map is created.
  Otherwise it was only being set up by 'perf record'.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-5-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf machine: Add machine__get_kallsyms_filename() (Adrian Hunter, 2014-01-31, 1 file, -8/+11)

  Separate out the logic used to make the kallsyms full path name for a
  machine. It will be reused in a subsequent patch.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-4-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf tools: Add kallsyms__get_function_start() (Adrian Hunter, 2014-01-31, 2 files, -3/+18)

  Separate out the logic used to find the start address of the reference
  symbol used to track kernel relocation. kallsyms__get_function_start() is
  used in subsequent patches.

  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-3-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
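  For illustration, finding a function's start address boils down to
  scanning kallsyms for the symbol name and returning its address. A small
  standalone sketch (not the actual perf helper, which builds on perf's own
  kallsyms parsing code):

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      /* Return 0 and fill *addr if the named symbol is found. */
      static int kallsyms_function_start(const char *kallsyms,
                                         const char *name, uint64_t *addr)
      {
              char sym[256];
              unsigned long long a;
              char type;
              FILE *f = fopen(kallsyms, "r");

              if (!f)
                      return -1;

              while (fscanf(f, "%llx %c %255s%*[^\n]", &a, &type, sym) == 3) {
                      if (!strcmp(sym, name)) {       /* e.g. "_stext" */
                              *addr = a;
                              fclose(f);
                              return 0;
                      }
              }
              fclose(f);
              return -1;
      }

      int main(void)
      {
              uint64_t addr;

              if (!kallsyms_function_start("/proc/kallsyms", "_stext", &addr))
                      printf("_stext at %#llx\n", (unsigned long long)addr);
              return 0;
      }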
| * perf symbols: Fix symbol annotation for relocated kernel (Adrian Hunter, 2014-01-31, 3 files, -2/+6)

  Kernel maps map memory addresses to file offsets. For symbol annotation,
  objdump needs the object VMA addresses. For an unrelocated kernel, that is
  the same as the memory address.

  The addresses passed to objdump for symbol annotation did not take into
  account kernel relocation. This patch fixes that.

  Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
  Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
  Tested-by: Jiri Olsa <jolsa@redhat.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@gmail.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Stephane Eranian <eranian@google.com>
  Link: http://lkml.kernel.org/r/1391004884-10334-2-git-send-email-adrian.hunter@intel.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf tools: Fix include for non x86 architectures (Francesco Fusco, 2014-01-31, 1 file, -0/+6)

  Commit 71ae8aac ("lib: introduce arch optimized hash library") added an
  include to <linux/hash.h> for setting up an architecture specific fast
  hash. Since perf includes the non-uapi kernel header directly, it cannot
  find <asm/hash.h> on non-x86, which prevents perf from being compiled on
  every architecture other than x86. The problem is the inclusion of
  <asm/hash.h> in hash.h, which results in the following error originating
  from util/evlist.c:

    fatal error: asm/hash.h: No such file or directory

  This commit simply adds an empty <asm/hash.h> stub/file to fix the compile
  issue on non-x86 architectures. As perf does not use any of these new
  functions, it fixes the compilation and therefore seems to be the most
  appropriate solution to go with.

  Signed-off-by: Francesco Fusco <ffusco@redhat.com>
  Link: http://lkml.kernel.org/r/2cf8143aad65a6aa6fe30325ef8a65847141afa2.1390829373.git.ffusco@redhat.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
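  The stub itself is just an include guard with nothing in it; a sketch of
  what such a tools-side placeholder header looks like (the exact path and
  guard name are assumptions):

      /* tools/perf/util/include/asm/hash.h (illustrative path) */
      #ifndef __ASM_GENERIC_HASH_H
      #define __ASM_GENERIC_HASH_H

      /* Empty stub: perf does not use the arch-optimized hash functions. */

      #endif /* __ASM_GENERIC_HASH_H */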
| * perf tools: Fix AAAAARGH64 memory barriers (Peter Zijlstra, 2014-01-29, 1 file, -2/+2)

  Someone got the load and store barriers mixed up for AAAAARGH64. Turn them
  the right side up.

  Reported-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Fixes: a94d342b9cb0 ("tools/perf: Add required memory barriers")
  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Will Deacon <will.deacon@arm.com>
  Link: http://lkml.kernel.org/r/20140124154002.GF31570@twins.programming.kicks-ass.net
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
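  For reference, the corrected mapping pairs the read barrier with a load
  barrier and the write barrier with a store barrier; a sketch of what the
  AArch64 definitions in perf's header look like after the fix (treat the
  exact ish/ishld/ishst shareability domains as an assumption):

      /* AArch64 memory barriers used by the perf ring buffer (sketch) */
      #define mb()   asm volatile("dmb ish"   ::: "memory")  /* full barrier  */
      #define wmb()  asm volatile("dmb ishst" ::: "memory")  /* store barrier */
      #define rmb()  asm volatile("dmb ishld" ::: "memory")  /* load barrier  */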
| * perf tools: Demangle kernel and kernel module symbols too (Avi Kivity, 2014-01-27, 1 file, -1/+1)

  Some kernels contain C++ code, and thus their symbols need to be
  demangled. This allows 'perf kvm top' to generate readable output.

  Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/26f71bf5bf7ee1408e3f1a803556d5df18223ef1.1390420726.git.avi@cloudius-systems.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf/doc: Remove mention of non-existent set_perf_event_pending() from design.txt (Baruch Siach, 2014-01-26, 1 file, -1/+0)

  set_perf_event_pending() was removed in e360adbe ("irq_work: Add generic
  hardirq context callbacks").

  Signed-off-by: Baruch Siach <baruch@tkos.co.il>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
  Link: http://lkml.kernel.org/r/4c54761865d40210be0628cb84701afc5d57b5d8.1390686193.git.baruch@tkos.co.il
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent (Ingo Molnar, 2014-01-25, 3 files, -3/+4)

  Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

  * Fix traceevent plugin path definitions (Josh Boyer)
  * Load map before using map->map_ip() (Masami Hiramatsu)

  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * perf symbols: Load map before using map->map_ip() (Masami Hiramatsu, 2014-01-23, 1 file, -1/+2)

  In map_groups__find_symbol(), map->map_ip is used without ensuring that
  the map is loaded, so the address passed to map->map_ip isn't mapped the
  first time. E.g. the code below always fails to get a symbol at the first
  call:

    addr = /* Somewhere in the kernel text */
    symbol_conf.try_vmlinux_path = true;
    symbol__init();
    host_machine = machine__new_host();
    sym = machine__find_kernel_function(host_machine, addr, NULL, NULL);
    /* Note that machine__find_kernel_function calls
       map_groups__find_symbol */

  This patch ensures it by calling map__load() before using the map in
  map_groups__find_symbol().

  Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
  Cc: "David A. Long" <dave.long@linaro.org>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Namhyung Kim <namhyung@kernel.org>
  Cc: Oleg Nesterov <oleg@redhat.com>
  Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  Cc: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
  Cc: yrl.pp-manager.tt@hitachi.com
  Link: http://lkml.kernel.org/r/20140123022950.7206.17357.stgit@kbuild-fedora.yrl.intra.hitachi.co.jp
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
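  The shape of the fix is simply to force the map to load before its
  map_ip() translation is used; a sketch of the pattern, not the exact diff
  (the filter argument reflects the map__load() signature of that era and
  should be treated as an assumption):

      /* Inside map_groups__find_symbol() (sketch of the fix) */
      if (map != NULL && map__load(map, filter) >= 0) {
              /* map_ip() is only valid once the map's symbols are loaded. */
              return map__find_symbol(map, map->map_ip(map, addr), filter);
      }
      return NULL;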
| * perf tools: Fix traceevent plugin path definitions (Josh Boyer, 2014-01-23, 2 files, -2/+2)

  The plugindir_SQ definition contains $(prefix), which is not needed
  because the $(libdir) definition already contains the prefix. This leads
  to the path including an extra prefix, e.g. /usr/usr/lib64.

  The -DPLUGIN_DIR definition includes DESTDIR. This is incorrect, as it
  sets the plugin search path to include the value of DESTDIR. DESTDIR is a
  mechanism to install into a non-standard location such as a chroot or an
  RPM build root. In the RPM case, this leads to the search path being
  incorrect after the resulting RPM is installed (or in some cases an RPM
  build failure).

  Remove both of these unnecessary inclusions.

  Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
  Acked-by: Jiri Olsa <jolsa@redhat.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Steven Rostedt <rostedt@goodmis.org>
  Link: http://lkml.kernel.org/r/20140122150147.GK16455@hansolo.jdub.homelinux.org
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent (Ingo Molnar, 2014-01-23, 10 files, -15/+47)

  Pull perf tooling fixes and updates from Arnaldo Carvalho de Melo:

  * Fix JIT symbol resolution on heap (Namhyung Kim)
  * Fix wrong SVG height in 'timechart' (Stanislav Fomichev)
  * Free temp cpu_map in perf_session__cpu_bitmap (Stanislav Fomichev)
  * Fix NULL pointer reference bug with event unit in 'stat' (Stephane Eranian)
  * Fix memory corruption of xyarray when cpumask is used (Stephane Eranian)
  * Ensure sscanf does not overrun the "mem" field (Alan Cox)
  * Add support for the xtensa architecture (Baruch Siach)

  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * perf symbols: Fix JIT symbol resolution on heap (Namhyung Kim, 2014-01-21, 1 file, -2/+2)

  Gaurav reported that perf cannot profile a JIT program if it executes the
  code on the heap. This was because the current map__new() only handles JIT
  on anon mappings; extend it to handle the no_dso (heap, stack) case too.

  This patch assumes JIT profiling only provides dynamic function symbols,
  so check the mapping type to distinguish the case. It provides no symbols
  for a data mapping; if we need to support symbols on data mappings later,
  it should be changed.

  Reported-by: Gaurav Jain <gjain@fb.com>
  Signed-off-by: Namhyung Kim <namhyung@kernel.org>
  Tested-by: Gaurav Jain <gjain@fb.com>
  Cc: Andi Kleen <andi@firstfloor.org>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Gaurav Jain <gjain@fb.com>
  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Namhyung Kim <namhyung.kim@lge.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1389836971-3549-1-git-send-email-namhyung@kernel.org
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf stat: Fix memory corruption of xyarray when cpumask is used (Stephane Eranian, 2014-01-20, 1 file, -2/+5)

  This patch fixes a memory corruption problem with the xyarray when the
  evsel fds get closed at the end of the run_perf_stat() call. It could be
  triggered with:

  # perf stat -a -e power/energy-cores/ ls

  When cpumasks are used by events (e.g., RAPL or uncores), the evsel fds
  are allocated based on the actual number of CPUs monitored. That number
  can be smaller than the total number of CPUs on the system.

  The problem arises at the end, when perf stat closes the fds twice. When
  fds are closed, their entry in the xyarray is set to -1. The first close()
  on the evsel is made from __run_perf_stat() and it uses the actual number
  of CPUs for the event, which is how the xyarray was allocated. The second
  is from perf_evlist__close(), but that one uses the total number of CPUs
  in the system, so it assumes the xyarray was allocated to cover it.
  However it was not, and some writes corrupt memory.

  The fix in perf_evlist__close() is to first try the evsel->cpus if
  present, and if not, use evlist->cpus. That fixes the problem.

  Signed-off-by: Stephane Eranian <eranian@google.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/r/1389972846-6566-3-git-send-email-eranian@google.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
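  A sketch of the described fix in perf_evlist__close(), picking the
  per-event cpu map when one exists (the field and helper names follow the
  perf internals of that time and should be treated as assumptions):

      /* Sketch: close each evsel over the counts it was opened with. */
      void perf_evlist__close(struct perf_evlist *evlist)
      {
              struct perf_evsel *evsel;
              int nthreads = thread_map__nr(evlist->threads);

              list_for_each_entry_reverse(evsel, &evlist->entries, node) {
                      /* cpumask-bound events (RAPL, uncore) carry their own map */
                      int ncpus = evsel->cpus ? evsel->cpus->nr
                                              : cpu_map__nr(evlist->cpus);

                      perf_evsel__close(evsel, ncpus, nthreads);
              }
      }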
| * perf evsel: Remove duplicate member zeroing after free (Stephane Eranian, 2014-01-20, 1 file, -1/+0)

  No need to set evsel->fd to NULL after calling perf_evsel__free_fd(), as
  this method already does that.

  Cc: Adrian Hunter <adrian.hunter@intel.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Namhyung Kim <namhyung@kernel.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/n/tip-wu6kul8fpapr8iyqm685ewtf@git.kernel.org
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf tools: Ensure sscanf does not overrun the "mem" field (Alan Cox, 2014-01-20, 1 file, -1/+1)

  Make the parsing robust.

  (perf has some other assumptions that BUFSIZE <= MAX_PATH which are not
  touched here)

  Reported-by: Jackie Chang
  Signed-off-by: Alan Cox <alan@linux.intel.com>
  Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/n/tip-g2uoiwbrpiimb63rx32qv8ne@git.kernel.org
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
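  The general technique is to give sscanf an explicit field width no larger
  than the destination buffer. A small standalone illustration of that
  pattern (the sample line and format string are illustrative, not the exact
  code that was fixed):

      #include <stdio.h>

      int main(void)
      {
              const char *line =
                      "00400000-0040b000 r-xp 00000000 08:01 123456 /bin/cat";
              char prot[5], execname[128];
              unsigned long long begin, end;

              /*
               * "%4s" and "%127s" bound the writes to sizeof(buf)-1 bytes
               * plus the NUL, so an oversized field cannot overrun the
               * buffers the way an unbounded "%s" can.
               */
              if (sscanf(line, "%llx-%llx %4s %*x %*x:%*x %*u %127s",
                         &begin, &end, prot, execname) == 4)
                      printf("%s mapped %llx-%llx (%s)\n",
                             execname, begin, end, prot);
              return 0;
      }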
| * perf stat: fix NULL pointer reference bug with event unit (Stephane Eranian, 2014-01-20, 3 files, -6/+22)

  This patch fixes a problem with the handling of the newly introduced
  optional event unit. The following cmdline caused a segfault:

  $ perf stat -e cpu/event-0x3c/ ls

  This patch fixes the problem with the default setting for alias->unit
  which was eventually causing the segfault.

  Signed-off-by: Stephane Eranian <eranian@google.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/r/1389972846-6566-2-git-send-email-eranian@google.com
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf tools: Add support for the xtensa architecture (Baruch Siach, 2014-01-20, 1 file, -0/+7)

  Tested using kernel tracepoints on a QEMU simulated environment.

  Kernel support for perf depends on the patch "xtensa: enable
  HAVE_PERF_EVENTS", which is scheduled for v3.14. Hardware performance
  counters are not supported under xtensa yet.

  Acked-by: Ingo Molnar <mingo@kernel.org>
  Acked-by: Max Filippov <jcmvbkbc@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: linux-xtensa@linux-xtensa.org
  Link: http://lkml.kernel.org/r/aafcdb22f04e2d3188d2938528939481be56b649.1389608855.git.baruch@tkos.co.il
  Signed-off-by: Baruch Siach <baruch@tkos.co.il>
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| * perf session: Free cpu_map in perf_session__cpu_bitmap (Stanislav Fomichev, 2014-01-20, 1 file, -3/+7)

  This method uses a temporary struct cpu_map to figure out the cpus present
  in the received cpu list in string form, but it failed to free it before
  returning. Fix it.

  Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
  Cc: Adrian Hunter <adrian.hunter@intel.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Namhyung Kim <namhyung@kernel.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1390217980-22424-3-git-send-email-stfomichev@yandex-team.ru
  [ Use goto + err = -1 to do the delete just once, in the normal exit path ]
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
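  The committer note describes the classic single-exit cleanup pattern; an
  abbreviated sketch of how it can look in this function (cpu_map__new() and
  cpu_map__delete() are real perf helpers, but the surrounding body is
  simplified and should be treated as an assumption):

      int perf_session__cpu_bitmap(struct perf_session *session,
                                   const char *cpu_list, unsigned long *cpu_bitmap)
      {
              int i, err = -1;
              struct cpu_map *map = cpu_map__new(cpu_list);

              if (map == NULL) {
                      pr_err("Invalid cpu_list\n");
                      return -1;
              }

              for (i = 0; i < map->nr; i++) {
                      int cpu = map->map[i];

                      if (cpu >= MAX_NR_CPUS) {
                              pr_err("Requested CPU %d too large. "
                                     "Consider raising MAX_NR_CPUS\n", cpu);
                              goto out_delete_map;    /* err is still -1 */
                      }
                      set_bit(cpu, cpu_bitmap);
              }

              err = 0;        /* the normal path falls through to the delete */

      out_delete_map:
              cpu_map__delete(map);
              return err;
      }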
| * perf timechart: Fix wrong SVG height (Stanislav Fomichev, 2014-01-20, 1 file, -0/+3)

  If we call perf timechart with -p 0 arguments, it means we don't want any
  task-related data. That works, but space for task data is still reserved
  in the generated SVG. Remove this unused empty space by passing 0 as the
  count to open_svg().

  Signed-off-by: Stanislav Fomichev <stfomichev@yandex-team.ru>
  Cc: Adrian Hunter <adrian.hunter@intel.com>
  Cc: David Ahern <dsahern@gmail.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jiri Olsa <jolsa@redhat.com>
  Cc: Namhyung Kim <namhyung@kernel.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1390217980-22424-2-git-send-email-stfomichev@yandex-team.ru
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | Merge branch 'x86-x32-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2014-01-20, 2 files, -21/+23)

  Pull x32 uapi changes from Peter Anvin:
   "This is the first few of a set of patches by H.J. Lu to make the kernel
    uapi headers usable for x32, as required by some non-glibc libcs. These
    particular patches make the stat and statfs structures usable"

  * 'x86-x32-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86, x32: Use __kernel_long_t for __statfs_word
    x86, x32: Use __kernel_long_t/__kernel_ulong_t in x86-64 stat.h
| * | x86, x32: Use __kernel_long_t for __statfs_word (H.J. Lu, 2013-12-20, 1 file, -1/+1)

  The x32 statfs system call is the same as the x86-64 statfs system call,
  which uses a 64-bit integer for __statfs_word. This patch defines
  __statfs_word as __kernel_long_t instead of long.

  Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
  Link: http://lkml.kernel.org/r/CAMe9rOrcppHvC5g8U9n7D%2BpxVGdu1G598pge3Erfw7Pr-iEpAQ@mail.gmail.com
  Cc: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | x86, x32: Use __kernel_long_t/__kernel_ulong_t in x86-64 stat.h (H.J. Lu, 2013-12-20, 1 file, -20/+22)

  Both x32 and x86-64 use the same stat system call interface. But x32 long
  is 32-bit. This patch changes x86 uapi <asm/stat.h> to use
  __kernel_long_t/__kernel_ulong_t in x86-64 stat.

  Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
  Link: http://lkml.kernel.org/r/CAMe9rOquPtWEro0GQ=Z95pZJ=c7GGkSHynjN4FbiB4p445x-Ng@mail.gmail.com
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
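  The reason this works: __kernel_long_t follows the kernel ABI rather than
  the userspace compiler's long, and the x32 uapi headers override it to be
  64-bit. A sketch of the relevant typedefs, quoted from memory and
  condensed into one header-like block (treat the exact definitions and the
  cited paths as assumptions):

      #ifndef __SKETCH_POSIX_TYPES_H
      #define __SKETCH_POSIX_TYPES_H

      #if defined(__x86_64__) && defined(__ILP32__)
      /*
       * x32: userspace long is 32-bit, but the kernel ABI shared with
       * x86-64 expects 64-bit fields, so the __kernel_* types stay 64-bit
       * (cf. arch/x86/include/uapi/asm/posix_types_x32.h).
       */
      typedef long long               __kernel_long_t;
      typedef unsigned long long      __kernel_ulong_t;
      #else
      /* Default for most ABIs (cf. include/uapi/asm-generic/posix_types.h). */
      typedef long                    __kernel_long_t;
      typedef unsigned long           __kernel_ulong_t;
      #endif

      #endif /* __SKETCH_POSIX_TYPES_H */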
* | | Merge branch 'x86/mpx' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2014-01-20, 7 files, -26/+133)

  Pull x86 cpufeature and mpx updates from Peter Anvin:
   "This includes the basic infrastructure for MPX (Memory Protection
    Extensions) support, but does not include MPX support itself. It is,
    however, a prerequisite for KVM support for MPX, which I believe will be
    pushed later this merge window by the KVM team.

    This includes moving the functionality in
    futex_atomic_cmpxchg_inatomic() into a new function in uaccess.h so it
    can be reused - this will be used by the final MPX patches.

    The actual MPX functionality (map management and so on) will be pushed
    in a future merge window, when ready"

  * 'x86/mpx' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/intel/mpx: Remove unused LWP structure
    x86, mpx: Add MPX related opcodes to the x86 opcode map
    x86: replace futex_atomic_cmpxchg_inatomic() with user_atomic_cmpxchg_inatomic
    x86: add user_atomic_cmpxchg_inatomic at uaccess.h
    x86, xsave: Support eager-only xsave features, add MPX support
    x86, cpufeature: Define the Intel MPX feature flag
| * | | x86/intel/mpx: Remove unused LWP structure (Ingo Molnar, 2014-01-20, 1 file, -8/+2)

  We don't support LWP yet, don't give the impression that we do: represent
  the LWP state as opaque 128 bytes, the way Linux sees it currently.

  Cc: Qiaowei Ren <qiaowei.ren@intel.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Link: http://lkml.kernel.org/n/tip-ecarmjtfKpanpAapfck6dj6g@git.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | x86, mpx: Add MPX related opcodes to the x86 opcode map (Qiaowei Ren, 2014-01-17, 1 file, -2/+2)

  This patch adds all the MPX instructions to the x86 opcode map, so the x86
  instruction decoder can decode MPX instructions.

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
  Link: http://lkml.kernel.org/r/1389518403-7715-4-git-send-email-qiaowei.ren@intel.com
  Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | x86: replace futex_atomic_cmpxchg_inatomic() with user_atomic_cmpxchg_inatomic (Qiaowei Ren, 2013-12-16, 1 file, -20/+1)

  futex_atomic_cmpxchg_inatomic() is simply the 32-bit implementation of
  user_atomic_cmpxchg_inatomic(), which in turn is simply a generalization
  of the original code in futex_atomic_cmpxchg_inatomic(). Use the newly
  generalized user_atomic_cmpxchg_inatomic() as the futex implementation,
  too.

  [ hpa: retain the inline in futex.h rather than changing it to a macro ]

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
  Link: http://lkml.kernel.org/r/1387002303-6620-2-git-send-email-qiaowei.ren@intel.com
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
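  After the change, the futex helper is just a thin wrapper; a sketch of
  what the x86 futex.h inline reduces to (verify against the real header
  before relying on the exact signature):

      /* arch/x86/include/asm/futex.h (sketch) */
      static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
                                                      u32 oldval, u32 newval)
      {
              /* Delegate to the generalized user-space cmpxchg helper. */
              return user_atomic_cmpxchg_inatomic(uval, uaddr, oldval, newval);
      }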
| * | | x86: add user_atomic_cmpxchg_inatomic at uaccess.h (Qiaowei Ren, 2013-12-16, 1 file, -0/+92)

  This patch adds user_atomic_cmpxchg_inatomic() to use the CMPXCHG
  instruction against a user space address.

  This generalizes the already existing futex_atomic_cmpxchg_inatomic() so
  it can be used in other contexts. This will be used in the upcoming
  support for Intel MPX (Memory Protection Extensions.)

  [ hpa: replaced #ifdef inside a macro with IS_ENABLED() ]

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
  Link: http://lkml.kernel.org/r/1387002303-6620-1-git-send-email-qiaowei.ren@intel.com
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
| * | | x86, xsave: Support eager-only xsave features, add MPX support (Qiaowei Ren, 2013-12-06, 3 files, -4/+43)

  Some features, like Intel MPX, work only if the kernel uses the eagerfpu
  model. So we should force eagerfpu on unless the user has explicitly
  disabled it.

  Add definitions for Intel MPX and add it to the supported list.

  [ hpa: renamed XSTATE_FLEXIBLE to XSTATE_LAZY and added comments ]

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
  Link: http://lkml.kernel.org/r/9E0BE1322F2F2246BD820DA9FC397ADE014A6115@SHSMSX102.ccr.corp.intel.com
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | x86, cpufeature: Define the Intel MPX feature flag (Qiaowei Ren, 2013-12-06, 1 file, -0/+1)

  Define the Intel MPX (Memory Protection Extensions) CPU feature flag in
  the cpufeature list.

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
  Link: http://lkml.kernel.org/r/1386375658-2191-2-git-send-email-qiaowei.ren@intel.com
  Signed-off-by: Xudong Hao <xudong.hao@intel.com>
  Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
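  For context, MPX is advertised in CPUID.(EAX=7,ECX=0):EBX bit 14, which
  lands in cpufeature word 9; a sketch of the kind of definition this adds
  (the exact comment text is an assumption):

      /*
       * arch/x86/include/asm/cpufeature.h (sketch): word 9 holds the
       * CPUID.(EAX=7,ECX=0):EBX feature bits, and MPX is bit 14 there.
       */
      #define X86_FEATURE_MPX         (9*32+14) /* Memory Protection Extension */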
* | | | Merge branch 'x86-kaslr-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2014-01-20, 22 files, -158/+654)

  Pull x86 kernel address space randomization support from Peter Anvin:
   "This enables kernel address space randomization for x86"

  * 'x86-kaslr-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86, kaslr: Clarify RANDOMIZE_BASE_MAX_OFFSET
    x86, kaslr: Remove unused including <linux/version.h>
    x86, kaslr: Use char array to gain sizeof sanity
    x86, kaslr: Add a circular multiply for better bit diffusion
    x86, kaslr: Mix entropy sources together as needed
    x86/relocs: Add percpu fixup for GNU ld 2.23
    x86, boot: Rename get_flags() and check_flags() to *_cpuflags()
    x86, kaslr: Raise the maximum virtual address to -1 GiB on x86_64
    x86, kaslr: Report kernel offset on panic
    x86, kaslr: Select random position from e820 maps
    x86, kaslr: Provide randomness functions
    x86, kaslr: Return location from decompress_kernel
    x86, boot: Move CPU flags out of cpucheck
    x86, relocs: Add more per-cpu gold special cases
| * | | | x86, kaslr: Clarify RANDOMIZE_BASE_MAX_OFFSET (Kees Cook, 2014-01-14, 1 file, -11/+18)

  The help text for RANDOMIZE_BASE_MAX_OFFSET was confusing. This has been
  clarified, and updated to be an expert-only tunable.

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/20131210202745.GA2961@www.outflux.net
  Acked-by: Ingo Molnar <mingo@kernel.org>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | x86, kaslr: Remove unused including <linux/version.h> (Wei Yongjun, 2014-01-14, 1 file, -1/+0)

  Remove the include of <linux/version.h>, which is not needed.

  Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Link: http://lkml.kernel.org/r/CAPgLHd-Fjx1RybjWFAu1vHRfTvhWwMLL3x46BouC5uNxHPjy1A@mail.gmail.com
  Acked-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | x86, kaslr: Use char array to gain sizeof sanity (Kees Cook, 2013-11-12, 1 file, -1/+1)

  The build_str needs to be char [] not char * for the sizeof() to report
  the string length.

  Reported-by: Mathias Krause <minipli@googlemail.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/20131112165607.GA5921@www.outflux.net
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
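  A standalone illustration of why the type matters: sizeof a pointer is the
  pointer size, while sizeof an array is the number of bytes in the string
  including its NUL terminator (the sample string contents are made up):

      #include <stdio.h>

      static const char *build_str_ptr = "3.13.0 (gcc ...) #1 SMP";   /* pointer */
      static const char  build_str_arr[] = "3.13.0 (gcc ...) #1 SMP"; /* array   */

      int main(void)
      {
              /* Prints the pointer size (8 on x86_64), not the string length. */
              printf("sizeof(build_str_ptr) = %zu\n", sizeof(build_str_ptr));

              /* Prints strlen(...) + 1, which is what the entropy hashing wants. */
              printf("sizeof(build_str_arr) = %zu\n", sizeof(build_str_arr));
              return 0;
      }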
| * | | | x86, kaslr: Add a circular multiply for better bit diffusion (H. Peter Anvin, 2013-11-11, 1 file, -0/+11)

  If we don't have RDRAND (in which case nothing else *should* matter), most
  sources have a highly biased entropy distribution. Use a circular multiply
  to diffuse the entropic bits.

  A circular multiply is a good operation for this: it is cheap on standard
  hardware and because it is symmetric (unlike an ordinary multiply) it
  doesn't introduce its own bias.

  Cc: Kees Cook <keescook@chromium.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Link: http://lkml.kernel.org/r/20131111222839.GA28616@www.outflux.net
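  A portable sketch of what a circular multiply means here: a 64x64 to
  128-bit multiply whose high half is folded back into the low half, so bits
  that would overflow wrap around instead of being lost. The kernel does
  this with an inline mul instruction and a config-derived multiplier; the
  constant below is an arbitrary placeholder.

      #include <stdint.h>

      static uint64_t circular_multiply(uint64_t x, uint64_t k)
      {
              /* Full 128-bit product, then XOR the high half back in. */
              unsigned __int128 prod = (unsigned __int128)x * k;

              return (uint64_t)prod ^ (uint64_t)(prod >> 64);
      }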
| * | | | x86, kaslr: Mix entropy sources together as needed (Kees Cook, 2013-11-11, 2 files, -22/+65)

  Depending on availability, mix the RDRAND and RDTSC entropy together with
  XOR. Only when neither is available should the i8254 be used. Update the
  Kconfig documentation to reflect this.

  Additionally, since the bits used for entropy are masked elsewhere, drop
  the needless masking in get_random_long(). Similarly, use the entire TSC,
  not just the low 32 bits. Finally, to improve the starting entropy, do a
  simple hashing of the build-time version string and the boot-time
  boot_params structure for some additional level of unpredictability.

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/20131111222839.GA28616@www.outflux.net
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
| * | | | x86/relocs: Add percpu fixup for GNU ld 2.23 (Kees Cook, 2013-10-18, 1 file, -0/+2)

  The GNU linker tries to put __per_cpu_load into the percpu area, resulting
  in a lack of its relocation. Force this symbol to be relocated. Seen
  starting with GNU ld 2.23 and later.

  Reported-by: Ingo Molnar <mingo@kernel.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Cc: Michael Davidson <md@google.com>
  Cc: Cong Ding <dinggnu@gmail.com>
  Link: http://lkml.kernel.org/r/20131016064314.GA2739@www.outflux.net
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | x86, boot: Rename get_flags() and check_flags() to *_cpuflags() (H. Peter Anvin, 2013-10-13, 4 files, -10/+10)

  When a function is used in more than one file it may not be possible to
  immediately tell from context what the intended meaning is. As such, it is
  more important that the naming be self-evident. Thus, change get_flags()
  to get_cpuflags().

  For consistency, change check_flags() to check_cpuflags() even though it
  is only used in cpucheck.c.

  Link: http://lkml.kernel.org/r/1381450698-28710-2-git-send-email-keescook@chromium.org
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | x86, kaslr: Raise the maximum virtual address to -1 GiB on x86_64 (Kees Cook, 2013-10-13, 4 files, -7/+29)

  On 64-bit, this raises the maximum location to -1 GiB (from -1.5 GiB), the
  upper limit currently, since the kernel fixmap page mappings need to be
  moved to use the other 1 GiB (which would be the theoretical limit when
  building with -mcmodel=kernel).

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/1381450698-28710-7-git-send-email-keescook@chromium.org
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | x86, kaslr: Report kernel offset on panic (Kees Cook, 2013-10-13, 1 file, -0/+26)

  When the system panics, include the kernel offset in the report to assist
  in debugging.

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/1381450698-28710-6-git-send-email-keescook@chromium.org
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
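  One common way to do this is to register on the panic notifier chain and
  print how far the running kernel is from its link-time address; a
  64-bit-flavored sketch of the approach (the helper names and message
  format are reconstructed from memory, so treat them as assumptions):

      #include <linux/init.h>
      #include <linux/kernel.h>
      #include <linux/notifier.h>
      #include <asm/page.h>
      #include <asm/sections.h>

      /* Print the randomized offset of the kernel text at panic time. */
      static int dump_kernel_offset(struct notifier_block *self,
                                    unsigned long v, void *p)
      {
              pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
                       (unsigned long)&_text - __START_KERNEL_map,
                       __START_KERNEL_map);
              return NOTIFY_DONE;
      }

      static struct notifier_block kernel_offset_notifier = {
              .notifier_call = dump_kernel_offset,
      };

      static int __init register_kernel_offset_dumper(void)
      {
              atomic_notifier_chain_register(&panic_notifier_list,
                                             &kernel_offset_notifier);
              return 0;
      }
      __initcall(register_kernel_offset_dumper);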
| * | | | x86, kaslr: Select random position from e820 maps (Kees Cook, 2013-10-13, 3 files, -9/+202)

  Counts available alignment positions across all e820 maps, and chooses one
  randomly for the new kernel base address, making sure not to collide with
  unsafe memory areas.

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: http://lkml.kernel.org/r/1381450698-28710-5-git-send-email-keescook@chromium.org
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>