author     Thomas Gleixner <tglx@linutronix.de>    2012-05-07 17:59:47 +0000
committer  Thomas Gleixner <tglx@linutronix.de>    2012-05-08 12:35:05 +0200
commit     9cd75e13de2dcf32ecc21c7f277cff3c0ced059e (patch)
tree       ee14db83846f870aff7cc9e4c5e20633bc2c0136 /kernel/smp.c
parent     392d9215782595e92afb318c0d48c930f8e571f0 (diff)
powerpc: Fix broken cpu_idle_wait() implementation
commit 771dae818 (powerpc/cpuidle: Add cpu_idle_wait() to allow
switching of idle routines) implemented cpu_idle_wait() for powerpc.
The changelog says:
"The equivalent routine for x86 is in arch/x86/kernel/process.c
but the powerpc implementation is different.":
Unfortunately the changelog is completely useless as it does not tell
_WHY_ it is different.
Aside from being different, the implementation is patently wrong.
The rescheduling IPI is async. That means there is no guarantee that
the other cores have executed the IPI when cpu_idle_wait() returns.
But that's the whole purpose of this function: to guarantee that no
CPU uses the old idle handler anymore.
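
For illustration only (a sketch of the broken pattern, not the literal
hunk removed by this patch), the problem is that smp_send_reschedule()
merely queues the IPI and returns immediately:

	/*
	 * Illustrative sketch of the broken approach: the loop fires a
	 * reschedule IPI at each other CPU, but nothing waits for the
	 * remote CPUs to actually handle it, so this function can return
	 * while some CPU is still running the old idle routine.
	 */
	void cpu_idle_wait(void)
	{
		int cpu;

		smp_mb();
		for_each_online_cpu(cpu) {
			if (cpu != smp_processor_id())
				smp_send_reschedule(cpu);	/* async, no completion wait */
		}
	}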
Use the smp_call_function() based implementation, which fulfils the
requirements.
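
A minimal sketch of that approach (the shape of the x86 routine
referenced above; the do_nothing() name is illustrative and the exact
powerpc hunk may differ):

	static void do_nothing(void *unused)
	{
	}

	/*
	 * Sketch of the smp_call_function() based approach: with wait=1
	 * the call only returns after every other online CPU has executed
	 * do_nothing(), i.e. has left the old idle routine.
	 */
	void cpu_idle_wait(void)
	{
		smp_mb();
		smp_call_function(do_nothing, NULL, 1);
	}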
[ This code is going to be replaced by a core version to remove all the
pointless copies in arch/*, but this one should go to stable ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Arun R Bharadwaj <arun.r.bharadwaj@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120507175651.980164748@linutronix.de
Cc: stable@vger.kernel.org
Diffstat (limited to 'kernel/smp.c')
0 files changed, 0 insertions, 0 deletions