author     Paul E. McKenney <paulmck@linux.ibm.com>    2018-10-15 10:54:13 -0700
committer  Paul E. McKenney <paulmck@linux.ibm.com>    2018-11-12 08:56:25 -0800
commit     97562c018135a9d01c59bd3bf95a9458548b79e2
tree       e3382f990d2dce406e4a892f4b90958147379670 /Documentation/RCU
parent     97949f0176da396c32e7c881cbfbc61642fb1266
doc: RCU scheduler spinlock rcu_read_unlock() restriction remains
Given RCU flavor consolidation, when rcu_read_unlock() is invoked with
interrupts disabled, the reporting of the corresponding quiescent state is
deferred until interrupts are re-enabled. There was therefore some hope
that this would allow dropping the restriction against holding scheduler
spinlocks across an rcu_read_unlock() without disabling interrupts across
the entire corresponding RCU read-side critical section. Unfortunately,
the need to quickly provide a quiescent state to expedited grace periods
sometimes requires a call to raise_softirq() during rcu_read_unlock()
execution. Because raise_softirq() can sometimes acquire the scheduler
spinlocks, the restriction must remain in effect. This commit therefore
updates the RCU requirements documentation accordingly.
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
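
An illustrative sketch, not part of this patch: my_pi_lock and still_forbidden() are hypothetical names, with my_pi_lock standing in for any lock that the scheduler also acquires, such as a runqueue or priority-inheritance lock. Here interrupts are disabled across the rcu_read_unlock() itself but were not disabled at the matching rcu_read_lock(), so the unlock may still call raise_softirq() on behalf of an expedited grace period, and raise_softirq() can in turn try to acquire scheduler spinlocks.

#include <linux/irqflags.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for a lock that the scheduler also acquires. */
static DEFINE_RAW_SPINLOCK(my_pi_lock);

/*
 * Still forbidden: interrupts are disabled across rcu_read_unlock(),
 * but were enabled at the time of the matching rcu_read_lock().  The
 * unlock may need to call raise_softirq() to quickly provide a
 * quiescent state to an expedited grace period, and raise_softirq()
 * can acquire scheduler spinlocks, risking deadlock against the lock
 * already held here.
 */
static void still_forbidden(void)
{
	unsigned long flags;

	rcu_read_lock();
	raw_spin_lock_irqsave(&my_pi_lock, flags);
	/* ... access RCU-protected data ... */
	rcu_read_unlock();
	raw_spin_unlock_irqrestore(&my_pi_lock, flags);
}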
Diffstat (limited to 'Documentation/RCU')
 Documentation/RCU/Design/Requirements/Requirements.html | 44 +++++++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 15 deletions(-)
diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html
index f74a2233865c..9fca73e03a98 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.html
+++ b/Documentation/RCU/Design/Requirements/Requirements.html
@@ -2475,23 +2475,37 @@ for context-switch-heavy <tt>CONFIG_NO_HZ_FULL=y</tt> workloads,
 but there is room for further improvement.
 
 <p>
-In the past, it was forbidden to disable interrupts across an
-<tt>rcu_read_unlock()</tt> unless that interrupt-disabled region
-of code also included the matching <tt>rcu_read_lock()</tt>.
-Violating this restriction could result in deadlocks involving the
-scheduler's runqueue and priority-inheritance spinlocks.
-This restriction was lifted when interrupt-disabled calls to
-<tt>rcu_read_unlock()</tt> started deferring the reporting of
-the resulting RCU-preempt quiescent state until the end of that
+It is forbidden to hold any of scheduler's runqueue or priority-inheritance
+spinlocks across an <tt>rcu_read_unlock()</tt> unless interrupts have been
+disabled across the entire RCU read-side critical section, that is,
+up to and including the matching <tt>rcu_read_lock()</tt>.
+Violating this restriction can result in deadlocks involving these
+scheduler spinlocks.
+There was hope that this restriction might be lifted when interrupt-disabled
+calls to <tt>rcu_read_unlock()</tt> started deferring the reporting of
+the resulting RCU-preempt quiescent state until the end of the
 corresponding interrupts-disabled region.
-This deferred reporting means that the scheduler's runqueue and
-priority-inheritance locks cannot be held while reporting an RCU-preempt
-quiescent state, which lifts the earlier restriction, at least from
-a deadlock perspective.
-Unfortunately, real-time systems using RCU priority boosting may
+Unfortunately, timely reporting of the corresponding quiescent state
+to expedited grace periods requires a call to <tt>raise_softirq()</tt>,
+which can acquire these scheduler spinlocks.
+In addition, real-time systems using RCU priority boosting need
 this restriction to remain in effect because deferred
-quiescent-state reporting also defers deboosting, which in turn
-degrades real-time latencies.
+quiescent-state reporting would also defer deboosting, which in turn
+would degrade real-time latencies.
+
+<p>
+In theory, if a given RCU read-side critical section could be
+guaranteed to be less than one second in duration, holding a scheduler
+spinlock across that critical section's <tt>rcu_read_unlock()</tt>
+would require only that preemption be disabled across the entire
+RCU read-side critical section, not interrupts.
+Unfortunately, given the possibility of vCPU preemption, long-running
+interrupts, and so on, it is not possible in practice to guarantee
+that a given RCU read-side critical section will complete in less than
+one second.
+Therefore, as noted above, if scheduler spinlocks are held across
+a given call to <tt>rcu_read_unlock()</tt>, interrupts must be
+disabled across the entire RCU read-side critical section.
 
 <h3><a name="Tracing and RCU">Tracing and RCU</a></h3>
 
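
For contrast, a minimal sketch of the pattern that the updated text does permit, reusing the hypothetical my_pi_lock declared in the sketch above: interrupts are disabled across the entire RCU read-side critical section, up to and including the matching rcu_read_lock(), so the quiescent-state report is deferred until interrupts are re-enabled, after the scheduler-class lock has been released.

/*
 * Permitted: interrupts are disabled before rcu_read_lock() and stay
 * disabled until after rcu_read_unlock(), so reporting of the
 * quiescent state is deferred until interrupts are re-enabled, by
 * which point the hypothetical my_pi_lock is no longer held.
 */
static void permitted(void)
{
	unsigned long flags;

	local_irq_save(flags);
	rcu_read_lock();
	raw_spin_lock(&my_pi_lock);
	/* ... access RCU-protected data ... */
	rcu_read_unlock();
	raw_spin_unlock(&my_pi_lock);
	local_irq_restore(flags);
}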