author		Paul E. McKenney <paulmck@linux.ibm.com>	2019-03-06 11:24:35 -0800
committer	Paul E. McKenney <paulmck@linux.ibm.com>	2019-03-26 14:37:06 -0700
commit		884b429ae66758bd2e006dc5fd56757e0fde896d (patch)
tree		e5aab09c35fe06f23ce5eda2f7d4d57ade709d68 /Documentation/RCU
parent		d1b493bbe101d85c86970f1b6c0401d4c1c473ed (diff)
doc: Fix typos and otherwise modernize checklist.txt
This commit fixes some issues with Documentation/RCU/checklist.txt.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Diffstat (limited to 'Documentation/RCU')
-rw-r--r--	Documentation/RCU/checklist.txt	43
1 file changed, 25 insertions, 18 deletions
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index fcc59fea5cd4..e98ff261a438 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -318,7 +318,7 @@ over a rather long period of time, but improvements are always welcome!
11. Any lock acquired by an RCU callback must be acquired elsewhere
with softirq disabled, e.g., via spin_lock_irqsave(),
- spin_lock_bh(), etc. Failing to disable irq on a given
+ spin_lock_bh(), etc. Failing to disable softirq on a given
acquisition of that lock will result in deadlock as soon as
the RCU softirq handler happens to run your RCU callback while
interrupting that acquisition's critical section.
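
A minimal sketch of this rule, using hypothetical names (struct foo, foo_lock, foo_reclaim_count) rather than anything from this patch: the callback runs from softirq context, so every process-context acquisition of the same lock must disable softirq, for example via spin_lock_bh().

	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct foo {
		struct rcu_head rcu;
		/* ... payload ... */
	};

	static DEFINE_SPINLOCK(foo_lock);
	static unsigned long foo_reclaim_count;

	/* RCU callback: runs from softirq, so plain spin_lock() suffices here. */
	static void foo_reclaim(struct rcu_head *rhp)
	{
		struct foo *fp = container_of(rhp, struct foo, rcu);

		spin_lock(&foo_lock);
		foo_reclaim_count++;
		spin_unlock(&foo_lock);
		kfree(fp);
	}

	/*
	 * Process context: spin_lock_bh() prevents foo_reclaim() from
	 * interrupting this critical section on the same CPU and
	 * deadlocking on foo_lock.
	 */
	unsigned long foo_reclaim_count_read(void)
	{
		unsigned long count;

		spin_lock_bh(&foo_lock);
		count = foo_reclaim_count;
		spin_unlock_bh(&foo_lock);
		return count;
	}
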
@@ -331,13 +331,16 @@ over a rather long period of time, but improvements are always welcome!
must use whatever locking or other synchronization is required
to safely access and/or modify that data structure.
- RCU callbacks are -usually- executed on the same CPU that executed
- the corresponding call_rcu() or call_srcu(). but are by -no-
- means guaranteed to be. For example, if a given CPU goes offline
- while having an RCU callback pending, then that RCU callback
- will execute on some surviving CPU. (If this was not the case,
- a self-spawning RCU callback would prevent the victim CPU from
- ever going offline.)
+ Do not assume that RCU callbacks will be executed on the same
+ CPU that executed the corresponding call_rcu() or call_srcu().
+ For example, if a given CPU goes offline while having an RCU
+ callback pending, then that RCU callback will execute on some
+ surviving CPU. (If this was not the case, a self-spawning RCU
+ callback would prevent the victim CPU from ever going offline.)
+ Furthermore, CPUs designated by rcu_nocbs= might well -always-
+ have their RCU callbacks executed on some other CPUs. In fact,
+ for some real-time workloads, this is the whole point of using
+ the rcu_nocbs= kernel boot parameter.
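
To illustrate the CPU-locality point, a minimal sketch using a hypothetical per-CPU foo_inflight counter: the enqueueing CPU is recorded before call_rcu(), because the callback itself might run on any CPU, including one chosen by the rcu_nocbs= offloading machinery.

	#include <linux/percpu.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		struct rcu_head rcu;
		int enqueue_cpu;	/* recorded at call_rcu() time */
	};

	static DEFINE_PER_CPU(atomic_t, foo_inflight);

	static void foo_reclaim(struct rcu_head *rhp)
	{
		struct foo *fp = container_of(rhp, struct foo, rcu);

		/*
		 * This might run on any CPU, so use the recorded CPU
		 * number rather than smp_processor_id().
		 */
		atomic_dec(&per_cpu(foo_inflight, fp->enqueue_cpu));
		kfree(fp);
	}

	void foo_free_rcu(struct foo *fp)
	{
		fp->enqueue_cpu = raw_smp_processor_id();
		atomic_inc(&per_cpu(foo_inflight, fp->enqueue_cpu));
		call_rcu(&fp->rcu, foo_reclaim);
	}
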
13. Unlike other forms of RCU, it -is- permissible to block in an
SRCU read-side critical section (demarked by srcu_read_lock()
@@ -379,8 +382,9 @@ over a rather long period of time, but improvements are always welcome!
never sends IPIs to other CPUs, so it is easier on
real-time workloads than is synchronize_rcu_expedited().
- Note that rcu_dereference() and rcu_assign_pointer() relate to
- SRCU just as they do to other forms of RCU.
+ Note that rcu_assign_pointer() relates to SRCU just as it does to
+ other forms of RCU, but instead of rcu_dereference() you should
+ use srcu_dereference() in order to avoid lockdep splats.
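
A minimal sketch of that pairing, with hypothetical names (foo_srcu, foo_ptr, struct foo): the updater publishes with rcu_assign_pointer(), and the reader uses srcu_dereference() with the matching srcu_struct so that lockdep can check for the enclosing srcu_read_lock().

	#include <linux/srcu.h>

	struct foo {
		int val;
	};

	DEFINE_STATIC_SRCU(foo_srcu);
	static struct foo __rcu *foo_ptr;

	/* Updater: rcu_assign_pointer() works for SRCU just as for RCU. */
	void foo_publish(struct foo *newfp)
	{
		rcu_assign_pointer(foo_ptr, newfp);
	}

	/* Reader: srcu_dereference() rather than rcu_dereference(). */
	int foo_read_val(void)
	{
		struct foo *fp;
		int idx, val = -1;

		idx = srcu_read_lock(&foo_srcu);
		fp = srcu_dereference(foo_ptr, &foo_srcu);
		if (fp)
			val = fp->val;
		srcu_read_unlock(&foo_srcu, idx);
		return val;
	}
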
14. The whole point of call_rcu(), synchronize_rcu(), and friends
is to wait until all pre-existing readers have finished before
@@ -400,6 +404,9 @@ over a rather long period of time, but improvements are always welcome!
read-side critical sections. It is the responsibility of the
RCU update-side primitives to deal with this.
+ For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
+ immediately after an srcu_read_unlock() to get a full barrier.
+
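
A minimal sketch of that barrier, with a hypothetical foo_srcu and reader_done flag; the smp_mb__after_srcu_read_unlock() call must immediately follow the srcu_read_unlock() to provide the full barrier.

	#include <linux/srcu.h>

	DEFINE_STATIC_SRCU(foo_srcu);
	static int reader_done;

	void foo_reader(void)
	{
		int idx;

		idx = srcu_read_lock(&foo_srcu);
		/* ... SRCU read-side accesses ... */
		srcu_read_unlock(&foo_srcu, idx);
		smp_mb__after_srcu_read_unlock();

		/*
		 * The store below is now ordered after the read-side
		 * critical section as seen by all CPUs.
		 */
		WRITE_ONCE(reader_done, 1);
	}
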
16. Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
__rcu sparse checks to validate your RCU code. These can help
find problems as follows:
@@ -423,15 +430,15 @@ over a rather long period of time, but improvements are always welcome!
These debugging aids can help you find problems that are
otherwise extremely difficult to spot.
-17. If you register a callback using call_rcu() or call_srcu(),
- and pass in a function defined within a loadable module,
- then it in necessary to wait for all pending callbacks to
- be invoked after the last invocation and before unloading
- that module. Note that it is absolutely -not- sufficient to
- wait for a grace period! The current (say) synchronize_rcu()
- implementation waits only for all previous callbacks registered
- on the CPU that synchronize_rcu() is running on, but it is -not-
+17. If you register a callback using call_rcu() or call_srcu(), and
+ pass in a function defined within a loadable module, then it is
+ necessary to wait for all pending callbacks to be invoked after
+ the last invocation and before unloading that module. Note that
+ it is absolutely -not- sufficient to wait for a grace period!
+ The current (say) synchronize_rcu() implementation is -not-
guaranteed to wait for callbacks registered on other CPUs.
+ The same holds for callbacks registered on the current CPU if
+ that CPU recently went offline and came back online.
You instead need to use one of the barrier functions: