author     Anna-Maria Gleixner <anna-maria@linutronix.de>   2018-06-12 18:16:20 +0200
committer  Thomas Gleixner <tglx@linutronix.de>             2018-06-12 23:33:24 +0200
commit     ccfbb5bed407053b27492a9adc06064d949a9aa6 (patch)
tree       1d12a1c6ba9e081d8af26dee5e60f8ed89bcdff2 /lib/dec_and_lock.c
parent     f2ae67941138a1e53cb1bc6a1b5878a8bdc74d26 (diff)
atomic: Add irqsave variant of atomic_dec_and_lock()
There are in-tree users of atomic_dec_and_lock() which must acquire the
spin lock with interrupts disabled. To work around the missing irqsave
variant of atomic_dec_and_lock() they use local_irq_save() at the call
site. This adds extra code and, in some places, keeps interrupts disabled
for longer than necessary. These call sites also need extra treatment for
PREEMPT_RT because the interrupt disabling is decoupled from the locking
function.
Implement the missing irqsave variant of the function.
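
For context, a typical affected call site today has to open-code the
interrupt handling around atomic_dec_and_lock(); a minimal sketch of that
workaround pattern (struct obj, obj_put() and its members are illustrative
names, not taken from a specific in-tree user):

#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative object, not part of this patch. */
struct obj {
	atomic_t refcnt;
	spinlock_t lock;
	struct list_head node;
};

/*
 * Current workaround: interrupts go off up front, even on the fast path
 * where the count does not drop to zero and the lock is never taken.
 */
static void obj_put(struct obj *obj)
{
	unsigned long flags;

	local_irq_save(flags);
	if (!atomic_dec_and_lock(&obj->refcnt, &obj->lock)) {
		local_irq_restore(flags);
		return;
	}
	/* Count hit zero: lock is held, interrupts are still disabled. */
	list_del(&obj->node);
	spin_unlock_irqrestore(&obj->lock, flags);
	kfree(obj);
}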
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20180612161621.22645-3-bigeasy@linutronix.de
Diffstat (limited to 'lib/dec_and_lock.c')
-rw-r--r--  lib/dec_and_lock.c | 16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index 347fa7ac2e8a..9555b68bb774 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -33,3 +33,19 @@ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 }
 
 EXPORT_SYMBOL(_atomic_dec_and_lock);
+
+int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock,
+				 unsigned long *flags)
+{
+	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
+	if (atomic_add_unless(atomic, -1, 1))
+		return 0;
+
+	/* Otherwise do it the slow way */
+	spin_lock_irqsave(lock, *flags);
+	if (atomic_dec_and_test(atomic))
+		return 1;
+	spin_unlock_irqrestore(lock, *flags);
+	return 0;
+}
+EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave);
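
With the new helper, a converted call site no longer needs the open-coded
local_irq_save()/local_irq_restore(): interrupts are only disabled on the
slow path, when the count actually drops to zero and the lock is taken.
A minimal sketch of such a converted put path, reusing the illustrative
struct obj from the earlier sketch and assuming the atomic_dec_and_lock_irqsave()
wrapper macro that accompanies this helper in <linux/spinlock.h>:

static void obj_put(struct obj *obj)
{
	unsigned long flags;

	/* Fast path: count was > 1, interrupts are never touched. */
	if (!atomic_dec_and_lock_irqsave(&obj->refcnt, &obj->lock, flags))
		return;

	/* Slow path: count hit zero, lock is held, interrupts are disabled. */
	list_del(&obj->node);
	spin_unlock_irqrestore(&obj->lock, flags);
	kfree(obj);
}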