author     Frederic Weisbecker <frederic@kernel.org>   2021-10-19 02:08:10 +0200
committer  Paul E. McKenney <paulmck@kernel.org>       2021-12-07 16:24:44 -0800
commit     b3bb02fe5a2b538ae53eda1fe591dd6c81a91ad4
tree       96583dd9618ca6f834ad082d48b721ee6720ce44 /kernel/rcu
parent     24ee940d89277602147ce1b8b4fd87b01b9a6660
rcu/nocb: Make rcu_core() callbacks acceleration (de-)offloading safe
When callbacks are offloaded, the NOCB kthreads handle callback
progression on behalf of rcu_core().
However, during the (de-)offloading process, the kthreads may not yet be
entirely up to the task. As a result, some callbacks' grace-period
sequence numbers may remain stale for a while because rcu_core() won't
take care of them either.
Fix this by forcing callback acceleration from rcu_core() as long
as the (de-)offloading process isn't complete.
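In condensed form, the patched quiescent-state report path looks like
this (a sketch distilled from the diff below; the surrounding
rnp->lock handling and the quiescent-state reporting itself are
elided):

	/* In rcu_report_qs_rdp(), after the patch (condensed): */
	if (!rcu_rdp_is_offloaded(rdp)) {
		/* Not offloaded: rcu_core() accelerates the callbacks itself. */
		needwake = rcu_accelerate_cbs(rnp, rdp);
	} else if (!rcu_segcblist_completely_offloaded(&rdp->cblist)) {
		/*
		 * Offloaded but mid-transition: the NOCB kthread may miss
		 * the acceleration, so flag it for rcu_core() to do below.
		 */
		needacc = true;
	}

	/* ... quiescent state reported, rnp->lock released ... */

	if (needacc) {
		/* Accelerate under the nocb lock to synchronize with the kthreads. */
		rcu_nocb_lock_irqsave(rdp, flags);
		rcu_accelerate_cbs_unlocked(rnp, rdp);
		rcu_nocb_unlock_irqrestore(rdp, flags);
	}

Note that the fallback path uses rcu_accelerate_cbs_unlocked() rather
than rcu_accelerate_cbs(), since rnp->lock has already been released by
rcu_report_qs_rnp() at that point (see the "^^^ Released rnp->lock"
comment in the diff).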
Reported-and-tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'kernel/rcu')
-rw-r--r--  kernel/rcu/tree.c  18
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5985698f3341..cb9abb80377f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2278,6 +2278,7 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
 	unsigned long flags;
 	unsigned long mask;
 	bool needwake = false;
+	bool needacc = false;
 	struct rcu_node *rnp;
 
 	WARN_ON_ONCE(rdp->cpu != smp_processor_id());
@@ -2305,16 +2306,29 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
 		 * This GP can't end until cpu checks in, so all of our
 		 * callbacks can be processed during the next GP.
 		 *
-		 * NOCB kthreads have their own way to deal with that.
+		 * NOCB kthreads have their own way to deal with that...
 		 */
-		if (!rcu_rdp_is_offloaded(rdp))
+		if (!rcu_rdp_is_offloaded(rdp)) {
 			needwake = rcu_accelerate_cbs(rnp, rdp);
+		} else if (!rcu_segcblist_completely_offloaded(&rdp->cblist)) {
+			/*
+			 * ...but NOCB kthreads may miss or delay callbacks acceleration
+			 * if in the middle of a (de-)offloading process.
+			 */
+			needacc = true;
+		}
 
 		rcu_disable_urgency_upon_qs(rdp);
 		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
 		/* ^^^ Released rnp->lock */
 		if (needwake)
 			rcu_gp_kthread_wake();
+
+		if (needacc) {
+			rcu_nocb_lock_irqsave(rdp, flags);
+			rcu_accelerate_cbs_unlocked(rnp, rdp);
+			rcu_nocb_unlock_irqrestore(rdp, flags);
+		}
 	}
 }