author	Linus Torvalds <torvalds@linux-foundation.org>	2023-11-09 22:22:13 -0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2023-12-20 15:32:37 +0100
commit	894076cde78f36ed776ab1b19e860b28adb47e14 (patch)
tree	154fc7df2dc5440359f1274ee049c649dad2e0f8
parent	badebd9ba26f8129c6dfb672a7d8b58486e7622a (diff)
asm-generic: qspinlock: fix queued_spin_value_unlocked() implementation
[ Upstream commit 125b0bb95dd6bec81b806b997a4ccb026eeecf8f ]

We really don't want to do atomic_read() or anything like that, since we
already have the value, not the lock.  The whole point of this is that
we've loaded the lock from memory, and we want to check whether the value
we loaded was a locked one or not.

The main use of this is the lockref code, which loads both the lock and
the reference count in one atomic operation, and then works on that
combined value.  With the atomic_read(), the compiler would pointlessly
spill the value to the stack, in order to then be able to read it back
"atomically".

This is the qspinlock version of commit c6f4a9002252 ("asm-generic:
ticket-lock: Optimize arch_spin_value_unlocked()") which fixed this same
bug for ticket locks.

Cc: Guo Ren <guoren@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Link: https://lore.kernel.org/all/CAHk-=whNRv0v6kQiV5QO6DJhjH4KEL36vWQ6Re8Csrnh4zbRkQ@mail.gmail.com/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-rw-r--r--	include/asm-generic/qspinlock.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 66260777d644..c133ed318315 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -49,7 +49,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
  */
 static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
 {
-	return !atomic_read(&lock.val);
+	return !lock.val.counter;
 }
 
 /**
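
For context, the pattern the commit message describes -- loading the lock word
and the reference count together, then testing the lock part of that copy --
looks roughly like the sketch below. This is a minimal, hypothetical example in
kernel-style C, not the actual lib/lockref.c code: struct my_lockref,
my_lockref_get_fast(), and the use of READ_ONCE()/cmpxchg64() are assumptions
made purely for illustration.

/*
 * Illustrative sketch only (not lib/lockref.c).  One 64-bit load fetches
 * the lock and the count together; the lock half of that local copy is
 * tested with queued_spin_value_unlocked() instead of being re-read
 * "atomically" from memory.
 */
struct my_lockref {				/* hypothetical stand-in for struct lockref */
	union {
		u64 lock_count;			/* combined view: lock + count */
		struct {
			struct qspinlock lock;
			int count;
		};
	};
};

static bool my_lockref_get_fast(struct my_lockref *lr)
{
	struct my_lockref old, new;

	/* One load gets both the lock and the reference count. */
	old.lock_count = READ_ONCE(lr->lock_count);

	/* Test the value we already hold; no atomic_read(), no reload. */
	if (!queued_spin_value_unlocked(old.lock))
		return false;			/* lock held: caller falls back to the slow path */

	new.lock_count = old.lock_count;
	new.count++;

	/* Install the bumped count only if the word did not change meanwhile. */
	return cmpxchg64(&lr->lock_count, old.lock_count, new.lock_count) == old.lock_count;
}

With the old atomic_read(&lock.val) form, the compiler had to spill the copied
value to the stack so it could be read back "atomically"; testing
lock.val.counter directly lets the combined value stay in a register.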