path: root/arch
Commit message  Author  Age  Files  Lines
* Merge branch 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu  Linus Torvalds  2011-01-07  30  -97/+250
    * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
        gameport: use this_cpu_read instead of lookup
        x86: udelay: Use this_cpu_read to avoid address calculation
        x86: Use this_cpu_inc_return for nmi counter
        x86: Replace uses of current_cpu_data with this_cpu ops
        x86: Use this_cpu_ops to optimize code
        vmstat: User per cpu atomics to avoid interrupt disable / enable
        irq_work: Use per cpu atomics instead of regular atomics
        cpuops: Use cmpxchg for xchg to avoid lock semantics
        x86: this_cpu_cmpxchg and this_cpu_xchg operations
        percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
        percpu,x86: relocate this_cpu_add_return() and friends
        connector: Use this_cpu operations
        xen: Use this_cpu_inc_return
        taskstats: Use this_cpu_ops
        random: Use this_cpu_inc_return
        fs: Use this_cpu_inc_return in buffer.c
        highmem: Use this_cpu_xx_return() operations
        vmstat: Use this_cpu_inc_return for vm statistics
        x86: Support for this_cpu_add, sub, dec, inc_return
        percpu: Generic support for this_cpu_add, sub, dec, inc_return
        ...
    Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c} as per Tejun.
| * x86: udelay: Use this_cpu_read to avoid address calculation  Christoph Lameter  2011-01-04  1  -1/+1
    The code will use a segment prefix instead of doing the lookup and calculation.
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Acked-by: "H. Peter Anvin" <hpa@zytor.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
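As an illustration of what such a conversion looks like, here is a minimal, hypothetical sketch; the struct and field names below are stand-ins, not the actual udelay code:

    #include <linux/percpu.h>

    struct my_cpu_stats {                           /* stand-in structure */
            unsigned long loops_per_jiffy;
    };
    DEFINE_PER_CPU(struct my_cpu_stats, my_cpu_stats);

    /* Before: compute this CPU's per-cpu address, then load through it. */
    static unsigned long lpj_via_lookup(void)
    {
            return __get_cpu_var(my_cpu_stats).loops_per_jiffy;
    }

    /* After: this_cpu_read() lets x86 emit a single %gs-prefixed load,
     * with no separate address calculation. */
    static unsigned long lpj_via_this_cpu(void)
    {
            return this_cpu_read(my_cpu_stats.loops_per_jiffy);
    }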
| * x86: Use this_cpu_inc_return for nmi counter  Tejun Heo  2010-12-30  1  -2/+1
    this_cpu_inc_return() saves us a memory access there.
    Reviewed-by: Pekka Enberg <penberg@kernel.org>
    Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
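Roughly the shape of the change, as a sketch with a made-up counter name:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned int, my_nmi_count);     /* made-up counter */

    /* Before: increment, then read back -- two per-cpu accesses. */
    static unsigned int bump_old(void)
    {
            __this_cpu_inc(my_nmi_count);
            return __this_cpu_read(my_nmi_count);
    }

    /* After: one combined operation returning the new value. */
    static unsigned int bump_new(void)
    {
            return this_cpu_inc_return(my_nmi_count);
    }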
| * x86: Replace uses of current_cpu_data with this_cpu ops  Tejun Heo  2010-12-30  10  -27/+26
    Replace all uses of current_cpu_data with this_cpu operations on the per cpu structure cpu_info. The scalar accesses are replaced with the matching this_cpu ops, which results in smaller and more efficient code.
    In the long run, it might be a good idea to remove the cpu_data() macro too and use the per_cpu macro directly.
    tj: updated description
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
| * x86: Use this_cpu_ops to optimize code  Tejun Heo  2010-12-30  16  -62/+57
    Go through x86 code and replace __get_cpu_var and get_cpu_var instances that refer to a scalar and are not used for address determinations.
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Acked-by: Tejun Heo <tj@kernel.org>
    Acked-by: "H. Peter Anvin" <hpa@zytor.com>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
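A hedged before/after sketch of the pattern this commit targets (the variable is illustrative):

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned long, my_event_count);  /* illustrative */

    /* Before: get_cpu_var() disables preemption and computes the per-cpu
     * address; put_cpu_var() re-enables preemption. */
    static void count_event_old(void)
    {
            get_cpu_var(my_event_count)++;
            put_cpu_var(my_event_count);
    }

    /* After: this_cpu_inc() is preemption safe by itself and becomes a
     * single segment-prefixed increment on x86. */
    static void count_event_new(void)
    {
            this_cpu_inc(my_event_count);
    }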
| * Merge branch 'this_cpu_ops' into for-2.6.38  Tejun Heo  2010-12-18  2  -37/+153
| | * cpuops: Use cmpxchg for xchg to avoid lock semantics  Christoph Lameter  2010-12-18  1  -6/+15
    Use cmpxchg instead of xchg to realize this_cpu_xchg. xchg will cause LOCK overhead since LOCK is always implied, but cmpxchg will not.
    Baselines:
      xchg()          = 18 cycles (no segment prefix, LOCK semantics)
      __this_cpu_xchg =  1 cycle  (simulated using this_cpu_read/write, two prefixes; looks like the cpu can use loop optimization to get rid of most of the overhead)
    Cycles before:
      this_cpu_xchg   = 37 cycles (segment prefix and LOCK, implied by xchg)
    After:
      this_cpu_xchg   = 11 cycles (using cmpxchg without lock semantics)
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
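The underlying idea, shown as a simplified generic sketch. The real change is an x86 percpu macro that uses a cmpxchg without the LOCK prefix (safe because only the owning CPU touches the variable); this plain-C version only illustrates the retry loop:

    /* Emulate xchg with a compare-and-swap retry loop.  For a CPU-local
     * variable the unlocked form of cmpxchg is sufficient, while xchg
     * always implies a LOCK prefix on x86 and therefore costs more. */
    static inline unsigned long xchg_via_cmpxchg(unsigned long *p,
                                                 unsigned long new)
    {
            unsigned long old;

            do {
                    old = *p;                        /* snapshot current value */
            } while (cmpxchg(p, old, new) != old);   /* retry if it changed */

            return old;
    }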
| | * x86: this_cpu_cmpxchg and this_cpu_xchg operations  Christoph Lameter  2010-12-18  2  -1/+109
    Provide support as far as the hardware capabilities of the x86 cpus allow. Define CONFIG_CMPXCHG_LOCAL in Kconfig.cpu to allow core code to test for fast cpuops implementations.
    V1->V2:
    - Take out the definition for this_cpu_cmpxchg_8 and move it into a separate patch.
    tj:
    - Reordered ops to better follow this_cpu_* organization.
    - Renamed macro temp variables similar to their existing neighbours.
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
| | * percpu,x86: relocate this_cpu_add_return() and friends  Tejun Heo  2010-12-17  1  -36/+35
    - include/linux/percpu.h: this_cpu_add_return() and friends were located next to __this_cpu_add_return(). However, the overall organization is to first group by preemption safeness. Relocate this_cpu_add_return() and friends to the preemption-safe area.
    - arch/x86/include/asm/percpu.h: Relocate percpu_add_return_op() after other more basic operations. Relocate [__]this_cpu_add_return_8() so that they're first grouped by preemption safeness.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Christoph Lameter <cl@linux.com>
| * Merge branch 'this_cpu_ops' into for-2.6.38  Tejun Heo  2010-12-17  82  -659/+1004
| | * x86: Support for this_cpu_add, sub, dec, inc_return  Christoph Lameter  2010-12-17  1  -0/+43
    Supply an implementation for x86 in order to generate more efficient code.
    V2->V3:
    - Cleanup
    - Remove strange type checking from percpu_add_return_op.
    tj:
    - Dropped unused typedef from percpu_add_return_op().
    - Renamed ret__ to paro_ret__ in percpu_add_return_op().
    - Minor indentation adjustments.
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
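What using the new operation looks like, sketched with an invented per-cpu variable:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(long, my_budget);        /* invented per-cpu variable */

    /* Add 'amount' to this CPU's counter and return the new total.  On
     * x86 this can be emitted as a single xadd with a segment prefix
     * instead of a disable-preemption / load / add / store sequence. */
    static long charge(long amount)
    {
            return this_cpu_add_return(my_budget, amount);
    }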
| * xen: Use this_cpu_ops  Christoph Lameter  2010-12-17  4  -11/+11
    Use this_cpu_ops to reduce code size and simplify things in various places.
    V3->V4: Move instance of this_cpu_inc_return to a later patchset so that this patch can be applied without infrastructure changes.
    Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
| * kprobes: Use this_cpu_ops  Christoph Lameter  2010-12-17  1  -7/+7
    Use this_cpu ops in various places to optimize per cpu data access.
    Cc: Jason Baron <jbaron@redhat.com>
    Cc: Namhyung Kim <namhyung@gmail.com>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
* Merge branch 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  Linus Torvalds  2011-01-07  2  -2/+3
    * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (33 commits)
        usb: don't use flush_scheduled_work()
        speedtch: don't abuse struct delayed_work
        media/video: don't use flush_scheduled_work()
        media/video: explicitly flush request_module work
        ioc4: use static work_struct for ioc4_load_modules()
        init: don't call flush_scheduled_work() from do_initcalls()
        s390: don't use flush_scheduled_work()
        rtc: don't use flush_scheduled_work()
        mmc: update workqueue usages
        mfd: update workqueue usages
        dvb: don't use flush_scheduled_work()
        leds-wm8350: don't use flush_scheduled_work()
        mISDN: don't use flush_scheduled_work()
        macintosh/ams: don't use flush_scheduled_work()
        vmwgfx: don't use flush_scheduled_work()
        tpm: don't use flush_scheduled_work()
        sonypi: don't use flush_scheduled_work()
        hvsi: don't use flush_scheduled_work()
        xen: don't use flush_scheduled_work()
        gdrom: don't use flush_scheduled_work()
        ...
    Fixed up trivial conflict in drivers/media/video/bt8xx/bttv-input.c as per Tejun.
| * sh: don't use flush_scheduled_work()  Tejun Heo  2010-12-24  1  -1/+1
    flush_scheduled_work() is deprecated and scheduled to be removed. Directly flush psw->work on removal instead.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: linux-sh@vger.kernel.org
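The general replacement pattern, sketched with an invented driver structure (psw->work is the work item named in the commit; everything around it here is assumed):

    #include <linux/workqueue.h>

    struct push_switch_like {               /* invented stand-in */
            struct work_struct work;
    };

    static void remove_device(struct push_switch_like *psw)
    {
            /* Before: flush_scheduled_work();
             *   -- waits for *every* item on the system workqueue and is
             *      deprecated.
             * After: wait only for the work item this driver owns. */
            flush_work(&psw->work);
    }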
| * arm/sharpsl: don't use flush_scheduled_work()  Tejun Heo  2010-12-24  1  -1/+2
    flush_scheduled_work() is deprecated and scheduled to be removed. Directly flush the toggle_charger and sharpsl_bat works on suspend instead.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Russell King <linux@arm.linux.org.uk>
* Merge branch 'x86-apic-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  Linus Torvalds  2011-01-07  9  -228/+75
    * 'x86-apic-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
        x86: apic: Cleanup and simplify setup_local_APIC()
        x86: Further simplify mp_irq info handling
        x86: Unify 3 similar ways of saving mp_irqs info
        x86, ioapic: Avoid writing io_apic id if already correct
        x86, x2apic: Don't map lapic addr for preenabled x2apic systems
        x86, sfi: Use register_lapic_address()
        x86, apic: Use register_lapic_address() in init_apic_mapping()
        x86, apic: Remove early_init_lapic_mapping()
        x86, apic: Unify identical register_lapic_address() functions
| * Merge branch 'linus' into x86/apic-cleanups  Ingo Molnar  2011-01-07  406  -5089/+8360
    Conflicts:
      arch/x86/include/asm/io_apic.h
    Merge reason: Resolve the conflict, update to a more recent -rc base
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: apic: Cleanup and simplify setup_local_APIC()  Tejun Heo  2010-12-10  1  -12/+9
    setup_local_APIC() is used to set up the local APIC early during CPU initialization and already assumes that preemption is disabled on entry. However, the function unnecessarily disables and enables preemption and uses smp_processor_id() multiple times in and out of the nested preemption-disabled section. This gives the wrong impression that the function might be able to handle being called with preemption enabled and/or migrated to another processor in the middle.
    Make it clear that the function is always called with preemption disabled, drop the confusing preemption-disable block and call smp_processor_id() once at the beginning of the function.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Reviewed-by: Pekka Enberg <penberg@kernel.org>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: brgerst@gmail.com
    LKML-Reference: <4D00B3B9.7060702@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
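The resulting shape of the function, as a sketch rather than the real APIC code:

    #include <linux/smp.h>

    /* The caller guarantees preemption is disabled, so the CPU number
     * cannot change under us: fetch it once instead of calling
     * smp_processor_id() repeatedly or wrapping parts of the function
     * in another preempt_disable()/preempt_enable() pair. */
    static void setup_local_apic_sketch(void)
    {
            int cpu = smp_processor_id();

            /* ... program the local APIC registers for 'cpu' ... */
            (void)cpu;
    }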
| * x86: Further simplify mp_irq info handling  Feng Tang  2010-12-09  2  -67/+8
    assign_to_mp_irq() is copying the struct mpc_intsrc members one by one. That's silly. Use memcpy() and let the compiler figure it out. Same for the identical function assign_to_mpc_intsrc().
    mp_irq_mpc_intsrc_cmp() is comparing the struct members one by one, but no caller ever checks the different return codes. Use memcmp() instead.
    Remove the extra printk in MP_ioapic_info().
    Signed-off-by: Feng Tang <feng.tang@linux.intel.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Alan Cox <alan@linux.intel.com>
    Cc: Len Brown <len.brown@intel.com>
    LKML-Reference: <20101208151857.212f0018@feng-i7>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
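The memcpy()/memcmp() idea in a self-contained sketch; the struct here is a simplified stand-in for struct mpc_intsrc, not the real definition:

    #include <linux/string.h>

    struct intsrc_like {                    /* simplified stand-in */
            unsigned char type, irqtype;
            unsigned short irqflag;
            unsigned char srcbus, srcbusirq, dstapic, dstirq;
    };

    /* Before: dst->type = src->type; dst->irqtype = src->irqtype; ...
     * After: one memcpy() over the whole plain-data structure. */
    static void assign_intsrc(struct intsrc_like *dst,
                              const struct intsrc_like *src)
    {
            memcpy(dst, src, sizeof(*dst));
    }

    /* Callers only care about equal / not equal, so memcmp() is enough. */
    static int intsrc_equal(const struct intsrc_like *a,
                            const struct intsrc_like *b)
    {
            return memcmp(a, b, sizeof(*a)) == 0;
    }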
| * x86: Unify 3 similar ways of saving mp_irqs info  Feng Tang  2010-12-09  5  -110/+64
    There are 3 places defining similar functions for saving IRQ vector info into the mp_irqs[] array: mpparse/acpi/mrst. Replace the redundant code with a common function in io_apic.c, as it's only called when CONFIG_X86_IO_APIC=y.
    Signed-off-by: Feng Tang <feng.tang@intel.com>
    Cc: Alan Cox <alan@linux.intel.com>
    Cc: Len Brown <len.brown@intel.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    LKML-Reference: <20101207133204.4d913c5a@feng-i7>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86, ioapic: Avoid writing io_apic id if already correct  Yinghai Lu  2010-12-09  1  -2/+5
    For the 32-bit mptable path, setup_ids_from_mpc() always writes the io_apic id register, even if no change is needed. Skip the write when the readout and the mptable match.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Sebastian Siewior <bigeasy@linutronix.de>
    LKML-Reference: <4CFDF785.7010401@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
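The read-compare-write pattern in a sketch; the two helpers are assumed, not real kernel functions:

    unsigned int read_ioapic_id(int ioapic);                /* assumed helper */
    void write_ioapic_id(int ioapic, unsigned int id);      /* assumed helper */

    static void sync_ioapic_id(int ioapic, unsigned int wanted_id)
    {
            /* Only touch the hardware register when the current value
             * actually differs from what the MP table expects. */
            if (read_ioapic_id(ioapic) == wanted_id)
                    return;

            write_ioapic_id(ioapic, wanted_id);
    }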
| * x86, x2apic: Don't map lapic addr for preenabled x2apic systems  Yinghai Lu  2010-12-09  1  -3/+5
    If x2apic is preenabled and used by the kernel, we don't need to map the lapic address. That mapping will never be used. So just skip that in register_lapic_address().
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    LKML-Reference: <4CFDF69C.9070501@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86, sfi: Use register_lapic_address()  Yinghai Lu  2010-12-09  1  -12/+1
    register_lapic_address() and mp_sfi_register_lapic_address() are almost identical. Use the common function.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Cc: Len Brown <lenb@kernel.org>
    LKML-Reference: <4CFDF693.6000908@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86, apic: Use register_lapic_address() in init_apic_mapping()  Yinghai Lu  2010-12-09  1  -4/+1
    Remove the printk as well; we don't want to print when nothing changed. We print in register_lapic_address() already.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    LKML-Reference: <4CFDF68A.7020902@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86, apic: Remove early_init_lapic_mapping()  Yinghai Lu  2010-12-09  4  -30/+4
    It is almost the same as smp_register_lapic_addr(). We just need to let smp_read_mpc() call smp_register_lapic_addr() when early == 1. Add the apic_printk to smp_register_lapic_address().
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    LKML-Reference: <4CFDF681.3030509@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86, apic: Unify identical register_lapic_address() functions  Yinghai Lu  2010-12-09  4  -27/+16
    They are the same; move the common function to apic.c to allow further cleanups.
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Cc: Len Brown <lenb@kernel.org>
    LKML-Reference: <4CFDF675.4060305@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * Merge branch 'x86/platform' into x86/apic-cleanups  Thomas Gleixner  2010-12-09  22  -15/+1328
    Reason: apic cleanup series depends on x86/apic, x86/amd-nb and x86/platform
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * Merge branch 'x86/amd-nb' into x86/apic-cleanups  Thomas Gleixner  2010-12-09  1768  -25702/+70804
    Reason: apic cleanup series depends on x86/apic, x86/amd-nb and x86/platform
    Conflicts:
      arch/x86/include/asm/io_apic.h
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* Merge branch 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6  Linus Torvalds  2011-01-07  47  -1276/+1357
    * 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6: (65 commits)
        [S390] prevent unneccesary loops_per_jiffy recalculation
        [S390] cpuinfo: use get_online_cpus() instead of preempt_disable()
        [S390] smp: remove cpu hotplug messages
        [S390] mutex: enable spinning mutex on s390
        [S390] mutex: Introduce arch_mutex_cpu_relax()
        [S390] cio: fix ccwgroup unregistration race condition
        [S390] perf: add DWARF register lookup for s390
        [S390] cleanup ftrace backend functions
        [S390] ptrace cleanup
        [S390] smp/idle: call init_idle() before starting a new cpu
        [S390] smp: delay idle task creation
        [S390] dasd: Correct retry counter for terminated I/O.
        [S390] dasd: Add support for raw ECKD access.
        [S390] dasd: Prevent deadlock during suspend/resume.
        [S390] dasd: Improve handling of stolen DASD reservation
        [S390] dasd: do path verification for paths added at runtime
        [S390] dasd: add High Performance FICON multitrack support
        [S390] cio: reduce memory consumption of itcw structures
        [S390] nmi: enable machine checks early
        [S390] qeth: buffer count imbalance
        ...
| * [S390] prevent unneccesary loops_per_jiffy recalculation  Heiko Carstens  2011-01-05  1  -1/+1
    When the seqfile /proc/cpuinfo gets accessed, loops_per_jiffy gets recalculated for each possible cpu. However its value is only needed on first access. In addition, loops_per_jiffy should be recalculated when the machine reports a capability change.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] cpuinfo: use get_online_cpus() instead of preempt_disable()  Heiko Carstens  2011-01-05  1  -4/+3
    Use get_online_cpus() instead of preempt_disable() to make sure cpus don't go offline while accessing their per cpu data. The preempt_disable() stuff is old code which was used before get_online_cpus() was available.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
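The pattern in a sketch (the per-cpu variable is invented):

    #include <linux/cpu.h>
    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned long, my_stat);         /* invented */

    static unsigned long sum_my_stat(void)
    {
            unsigned long sum = 0;
            int cpu;

            get_online_cpus();              /* hold off CPU hotplug; may sleep */
            for_each_online_cpu(cpu)
                    sum += per_cpu(my_stat, cpu);
            put_online_cpus();

            return sum;
    }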
| * [S390] smp: remove cpu hotplug messages  Heiko Carstens  2011-01-05  3  -16/+0
    Get rid of messages that indicate if a cpu went online or offline. There is nothing special about this anymore and these messages might flood the kernel log buffer, which makes debugging harder since more important messages might be overwritten.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] mutex: enable spinning mutex on s390  Gerald Schaefer  2011-01-05  1  -1/+0
    This enables the spinning mutex feature on s390 by removing HAVE_DEFAULT_NO_SPIN_MUTEXES from arch/s390/Kconfig.
    Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] mutex: Introduce arch_mutex_cpu_relax()  Gerald Schaefer  2011-01-05  3  -0/+6
    The spinning mutex implementation uses cpu_relax() in busy loops as a compiler barrier. Depending on the architecture, cpu_relax() may do more than needed in these specific mutex spin loops. On System z we also give up the time slice of the virtual cpu in cpu_relax(), which prevents effective spinning on the mutex.
    This patch replaces cpu_relax() in the spinning mutex code with arch_mutex_cpu_relax(), which can be defined by each architecture that selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so this patch should not affect other architectures than System z for now.
    Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1290437256.7455.4.camel@thinkpad>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
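A sketch of the mechanism as described above (simplified; not the exact file layout):

    /* Generic side: fall back to cpu_relax() unless the architecture
     * opts in with HAVE_ARCH_MUTEX_CPU_RELAX and provides its own macro. */
    #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
    #define arch_mutex_cpu_relax()  cpu_relax()
    #endif

    /* The owner-spin loop in the mutex slow path then relaxes with the
     * (possibly cheaper) architecture-specific primitive; on s390 the
     * override can avoid yielding the virtual CPU's time slice. */
    static inline void spin_on_owner_sketch(volatile int *owner_running)
    {
            while (*owner_running)
                    arch_mutex_cpu_relax();
    }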
| * [S390] cleanup ftrace backend functions  Martin Schwidefsky  2011-01-05  4  -171/+135
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] ptrace cleanup  Martin Schwidefsky  2011-01-05  15  -263/+347
    Overhaul program event recording and the code dealing with the ptrace user space interface.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] smp/idle: call init_idle() before starting a new cpu  Heiko Carstens  2011-01-05  3  -7/+5
    Call init_idle(), which (re-)initializes the idle task structure, before it gets used on a new cpu. That way we can also get rid of the odd preempt_enable_no_resched() call we have in the cpu offline path within cpu_idle(). That call prevented preempt count imbalances between cpu hotplug operations.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] smp: delay idle task creation  Heiko Carstens  2011-01-05  1  -15/+26
    Delay idle task creation until a cpu gets set online instead of creating them for all possible cpus at system startup. For a one-cpu system this should save more than 1 MB. On my debug system with lots of debug stuff enabled this saves 2 MB. Same as on x86.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] dasd: Add support for raw ECKD access.  Stefan Haberland  2011-01-05  1  -0/+2
    Normal I/O operations through the DASD device driver give only access to the data fields of an ECKD device, even for track based I/O. This patch extends the DASD device driver to give access to whole ECKD tracks, including count, key and data fields.
    Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] dasd: Improve handling of stolen DASD reservation  Stefan Weinhuber  2011-01-05  1  -0/+1
    If a DASD device has been reserved by a Linux system, and later this reservation is ‘stolen’ by a second system by means of an unconditional reserve, then the first system receives a notification about this fact. With this patch such an event can either be ignored, as before, or it can be used to let the device fail all I/O requests, so that the device will not block anymore.
    Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] nmi: enable machine checks early  Heiko Carstens  2011-01-05  1  -0/+2
    Until now, machine checks for the swapper process of the IPL cpu are just implicitly (and more or less accidentally) enabled the first time the idle process goes into idle state and loads an enabled wait psw. Before that, machine checks are disabled. So let's enable them explicitly in trap_init(), so we have a well defined time when machine checks are enabled.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] 31 bit entry.S update.  Martin Schwidefsky  2011-01-05  1  -103/+109
    Make the 31 bit entry.S code as similar as possible to the 64 bit version in entry64.S. That makes it easier to add new code to the first level interrupt handler that affects both 31 and 64 bit kernels.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] cio: obtain mdc value per channel path  Sebastian Ott  2011-01-05  1  -0/+2
    Add support to accumulate the number of 64K-byte blocks that all paths to a device at least support for a transport command.
    Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] nohz: optimize arch_needs_cpu()  Heiko Carstens  2011-01-05  1  -1/+1
    arch_needs_cpu() always gets executed on the current cpu. Therefore the cpu parameter can be ignored, and it is possible to use __get_cpu_var() instead of per_cpu() to access the per_cpu variable, which will generate better code.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
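Sketch of the difference (the structure is illustrative, not the real s390 data):

    #include <linux/percpu.h>

    struct my_idle_info {                   /* illustrative */
            int timer_pending;
    };
    DEFINE_PER_CPU(struct my_idle_info, my_idle_info);

    /* Before: index the per-cpu area via the passed-in cpu number, even
     * though the function only ever runs on exactly that cpu. */
    static int needs_cpu_old(unsigned int cpu)
    {
            return per_cpu(my_idle_info, cpu).timer_pending;
    }

    /* After: address the local copy directly through the per-cpu base,
     * which generates better code. */
    static int needs_cpu_new(unsigned int cpu)
    {
            return __get_cpu_var(my_idle_info).timer_pending;
    }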
| * [S390] qdio: outbound tasklet scan threshold  Jan Glauber  2011-01-05  1  -0/+1
    Introduce a scan threshold for the qdio outbound queues. By setting the threshold, the driver can tell qdio after how many used SBALs it should schedule the outbound tasklet that scans the queue for finished SBALs. The threshold is specified by the drivers because a Hipersockets device is much faster at utilizing outbound buffers than a ZFCP or OSA device.
    The default values for how many used SBALs the tasklet should run after are:
      OSA:          > 31 SBALs
      Hipersockets: >  7 SBALs
      zfcp:         > 55 SBALs
    Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] hypfs: Move buffer allocation from open to read  Michael Holzheu  2011-01-05  6  -118/+195
    Currently the buffer for diagnose data is allocated in the open function of the debugfs file and is released in the close function. This has the drawback that a user (root) can pin that memory by not closing the file. This patch moves the buffer allocation to the read function. The buffer is automatically released after it is copied to userspace.
    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] current_thread_info optimization  Martin Schwidefsky  2011-01-05  1  -1/+1
    Use the thread_info lowcore field for current_thread_info(); this saves an unnecessary calculation.
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] extint: get rid of early code plus cleanup  Heiko Carstens  2011-01-05  2  -103/+51
    Get rid of register/unregister_early_external_interrupt() and clean up the code while at it.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [S390] pfault: delay register of pfault interrupt  Heiko Carstens  2011-01-05  3  -15/+15
    Use an early init call to initialize pfault. That way it is possible to use register_external_interrupt() instead of the early variant. There is no need to enable pfault any earlier, since it only has an effect if user space processes are running.
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>