commit 3774b28d8f3b9e8a946beb9550bee85e5454fc9f (patch)
Author:    Waiman Long <longman@redhat.com>  2024-03-18 20:50:04 -0400
Committer: Ingo Molnar <mingo@kernel.org>    2024-03-21 20:45:17 +0100
Tree:      b7bd0cbb64f1d2e4fb10639e2dda674422752685
Parent:    4ae3dc83b047d51485cce1a72be277a110d77c91
locking/qspinlock: Always evaluate lockevent* non-event parameter once
The 'inc' parameter of lockevent_add() and the 'cond' parameter of
lockevent_cond_inc() are only evaluated when CONFIG_LOCK_EVENT_COUNTS
is on. That can cause problems if those parameters are expressions
with side effects, like a "++". Fix this by evaluating those non-event
parameters once even if CONFIG_LOCK_EVENT_COUNTS is off. This also
eliminates the need for the __maybe_unused attribute on the wait_early
local variable in pv_wait_node().

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20240319005004.1692705-1-longman@redhat.com
Diffstat (limited to 'kernel/locking/qspinlock_paravirt.h')
 kernel/locking/qspinlock_paravirt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index ae2b12f68b90..169950fe1aad 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -294,7 +294,7 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
 {
 	struct pv_node *pn = (struct pv_node *)node;
 	struct pv_node *pp = (struct pv_node *)prev;
-	bool __maybe_unused wait_early;
+	bool wait_early;
 	int loop;

 	for (;;) {