| author | Vineet Gupta <vgupta@kernel.org> | 2020-05-06 14:41:12 -0700 |
|---|---|---|
| committer | Vineet Gupta <vgupta@kernel.org> | 2021-08-24 14:25:47 -0700 |
| commit | ecf51c9fa0960fd25cd66f2280fb1980b0d2e300 (patch) | |
| tree | c049cf0cbc7aae83536b81e3e7c5681ab492d47b | /arch/arc/include |
| parent | 9d011e12075dc51fb57f8203a08cc5229fbcb2ef (diff) | |
ARC: xchg: !LLSC: remove UP micro-optimization/hack
It gets in the way of cleaning things up and is a maintenance pain in the neck!
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
Diffstat (limited to 'arch/arc/include')
-rw-r--r-- | arch/arc/include/asm/cmpxchg.h | 8 |
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index d42917e803e1..f9564dbe39b7 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -113,15 +113,9 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
  * - For !LLSC, cmpxchg() needs to use that lock (see above) and there is lot
  *   of kernel code which calls xchg()/cmpxchg() on same data (see llist.h)
  *   Hence xchg() needs to follow same locking rules.
- *
- * Technically the lock is also needed for UP (boils down to irq save/restore)
- * but we can cheat a bit since cmpxchg() atomic_ops_lock() would cause irqs to
- * be disabled thus can't possibly be interrupted/preempted/clobbered by xchg()
- * Other way around, xchg is one instruction anyways, so can't be interrupted
- * as such
  */
-#if !defined(CONFIG_ARC_HAS_LLSC) && defined(CONFIG_SMP)
+#ifndef CONFIG_ARC_HAS_LLSC
 #define arch_xchg(ptr, with) \
 ({ \
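For background, the guard being simplified selects the spinlock-backed fallback used when the hardware has no LL/SC: cmpxchg() is emulated under a lock, so xchg() on the same data must take that same lock (the rule the remaining comment states). Below is a minimal userspace sketch of that locking rule, not the kernel's actual code: the names (emul_lock, sketch_cmpxchg, sketch_xchg) and the pthread mutex are illustrative assumptions standing in for ARC's atomic_ops_lock() in arch/arc/include/asm/cmpxchg.h.

```c
/*
 * Illustrative sketch only: on hardware without LL/SC, cmpxchg() is
 * emulated under a lock, so xchg() on the same data must take the
 * same lock to remain atomic with respect to it.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t emul_lock = PTHREAD_MUTEX_INITIALIZER;

/* cmpxchg emulated under the lock: returns the value found at *ptr */
static unsigned long sketch_cmpxchg(volatile unsigned long *ptr,
				    unsigned long expected, unsigned long new)
{
	unsigned long prev;

	pthread_mutex_lock(&emul_lock);
	prev = *ptr;
	if (prev == expected)
		*ptr = new;
	pthread_mutex_unlock(&emul_lock);
	return prev;
}

/* xchg takes the same lock, otherwise it could race with sketch_cmpxchg() */
static unsigned long sketch_xchg(volatile unsigned long *ptr, unsigned long new)
{
	unsigned long prev;

	pthread_mutex_lock(&emul_lock);
	prev = *ptr;
	*ptr = new;
	pthread_mutex_unlock(&emul_lock);
	return prev;
}

int main(void)
{
	volatile unsigned long v = 1;

	printf("cmpxchg(1->2) saw %lu\n", sketch_cmpxchg(&v, 1, 2));
	printf("xchg(->5) saw %lu, now %lu\n", sketch_xchg(&v, 5), v);
	return 0;
}
```

The UP shortcut being deleted relied on the reasoning in the removed comment: cmpxchg()'s lock disables interrupts on UP, and xchg() maps to a single instruction, so skipping the lock there was safe. Dropping the special case means the !LLSC path always takes the lock, which is what the simplified `#ifndef CONFIG_ARC_HAS_LLSC` guard expresses.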