path: root/Documentation/RCU
author     Joel Fernandes (Google) <joel@joelfernandes.org>    2018-10-05 16:18:10 -0700
committer  Paul E. McKenney <paulmck@linux.ibm.com>            2018-11-12 08:56:25 -0800
commit     3398496483df3508764d24917deaa8ab5176969e (patch)
tree       2f3aab93d09c2eb89b2ba3466e9d43cf79a8c3bb /Documentation/RCU
parent     ed8f6fb247785d98ffe6babcf93b7bedd2c88fd8 (diff)
doc: rcu: Update core and full API in whatisRCU
The RCU consolidation effort makes the update side of the RCU API consistent across all three RCU flavors (normal, sched, bh). This commit therefore updates the full API in the whatisRCU document, encouraging people to use the consolidated RCU update API instead of the old RCU-bh and RCU-sched update APIs.

Also, rcu_dereference() is documented as being the same for all three mechanisms (even before the consolidation); however, it is actually different: using the right rcu_dereference() primitive (such as rcu_dereference_bh() for bh) is needed to make lock debugging work correctly. This update also corrects that.

Finally, add local_bh_disable() and local_bh_enable() as softirq protection primitives and correct a grammar error in a quiz answer.

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
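As a rough sketch of what the consolidated update side looks like after this change (struct foo, gbl_foo, foo_lock, and foo_reclaim() below are hypothetical names in the spirit of whatisRCU.txt's own examples, not part of this patch), the same rcu_assign_pointer()/call_rcu() sequence now covers readers of every flavor:

    #include <linux/kernel.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct foo {
            int a;
            struct rcu_head rcu;
    };

    static struct foo __rcu *gbl_foo;
    static DEFINE_SPINLOCK(foo_lock);

    static void foo_reclaim(struct rcu_head *rp)
    {
            kfree(container_of(rp, struct foo, rcu));
    }

    /* Publish a new version of gbl_foo; the old version is freed only
     * after a grace period that now waits for rcu_read_lock(),
     * rcu_read_lock_bh(), and rcu_read_lock_sched() readers alike. */
    static void foo_update(int new_a)
    {
            struct foo *new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
            struct foo *old_fp;

            if (!new_fp)
                    return;
            new_fp->a = new_a;

            spin_lock(&foo_lock);
            old_fp = rcu_dereference_protected(gbl_foo,
                                               lockdep_is_held(&foo_lock));
            rcu_assign_pointer(gbl_foo, new_fp);
            spin_unlock(&foo_lock);

            if (old_fp)
                    call_rcu(&old_fp->rcu, foo_reclaim);
            /* ... or: synchronize_rcu(); kfree(old_fp); */
    }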
Diffstat (limited to 'Documentation/RCU')
-rw-r--r--  Documentation/RCU/whatisRCU.txt | 55
1 file changed, 28 insertions(+), 27 deletions(-)
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 86d82f7f3500..7c33445fd0e5 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -322,28 +322,27 @@ to their callers and (2) call_rcu() callbacks may be invoked. Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.
-There are no fewer than three RCU mechanisms in the Linux kernel; the
-diagram above shows the first one, which is by far the most commonly used.
-The rcu_dereference() and rcu_assign_pointer() primitives are used for
-all three mechanisms, but different defer and protect primitives are
-used as follows:
+There are at least three flavors of RCU usage in the Linux kernel. The diagram
+above shows the most common one. On the updater side, the rcu_assign_pointer(),
+synchronize_rcu() and call_rcu() primitives used are the same for all three
+flavors. However, for protection (on the reader side), the primitives used vary
+depending on the flavor:
-       Defer                   Protect
+a.     rcu_read_lock() / rcu_read_unlock()
+       rcu_dereference()
-a.     synchronize_rcu()       rcu_read_lock() / rcu_read_unlock()
-       call_rcu()              rcu_dereference()
+b.     rcu_read_lock_bh() / rcu_read_unlock_bh()
+       local_bh_disable() / local_bh_enable()
+       rcu_dereference_bh()
-b.     synchronize_rcu_bh()    rcu_read_lock_bh() / rcu_read_unlock_bh()
-       call_rcu_bh()           rcu_dereference_bh()
+c.     rcu_read_lock_sched() / rcu_read_unlock_sched()
+       preempt_disable() / preempt_enable()
+       local_irq_save() / local_irq_restore()
+       hardirq enter / hardirq exit
+       NMI enter / NMI exit
+       rcu_dereference_sched()
-c.     synchronize_sched()     rcu_read_lock_sched() / rcu_read_unlock_sched()
-       call_rcu_sched()        preempt_disable() / preempt_enable()
-                               local_irq_save() / local_irq_restore()
-                               hardirq enter / hardirq exit
-                               NMI enter / NMI exit
-                               rcu_dereference_sched()
-
-These three mechanisms are used as follows:
+These three flavors are used as follows:
a. RCU applied to normal data structures.
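To make the pairings in items (a)-(c) above concrete, here is a hedged reader-side sketch (struct foo and gbl_foo are the hypothetical names from the earlier sketch, in the style of the document's own examples); the point is only that each critical-section primitive pairs with its matching rcu_dereference*() variant so that lockdep-RCU can check the usage:

    /* (a) Vanilla RCU readers. */
    static int foo_get_a_rcu(void)
    {
            int val;

            rcu_read_lock();
            val = rcu_dereference(gbl_foo)->a;
            rcu_read_unlock();
            return val;
    }

    /* (b) RCU-bh readers: anything that disables softirqs qualifies. */
    static int foo_get_a_bh(void)
    {
            int val;

            local_bh_disable();                     /* or rcu_read_lock_bh() */
            val = rcu_dereference_bh(gbl_foo)->a;
            local_bh_enable();                      /* or rcu_read_unlock_bh() */
            return val;
    }

    /* (c) RCU-sched readers: anything that disables preemption qualifies. */
    static int foo_get_a_sched(void)
    {
            int val;

            preempt_disable();                      /* or rcu_read_lock_sched() */
            val = rcu_dereference_sched(gbl_foo)->a;
            preempt_enable();                       /* or rcu_read_unlock_sched() */
            return val;
    }

In every case the matching update side is the consolidated synchronize_rcu()/call_rcu() shown earlier.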
@@ -867,18 +866,20 @@ RCU: Critical sections Grace period Barrier
bh:     Critical sections       Grace period                    Barrier
-       rcu_read_lock_bh        call_rcu_bh                     rcu_barrier_bh
-       rcu_read_unlock_bh      synchronize_rcu_bh
-       rcu_dereference_bh      synchronize_rcu_bh_expedited
+       rcu_read_lock_bh        call_rcu                        rcu_barrier
+       rcu_read_unlock_bh      synchronize_rcu
+       [local_bh_disable]      synchronize_rcu_expedited
+       [and friends]
+       rcu_dereference_bh
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held
sched:  Critical sections       Grace period                    Barrier
-       rcu_read_lock_sched     synchronize_sched               rcu_barrier_sched
-       rcu_read_unlock_sched   call_rcu_sched
-       [preempt_disable]       synchronize_sched_expedited
+       rcu_read_lock_sched     call_rcu                        rcu_barrier
+       rcu_read_unlock_sched   synchronize_rcu
+       [preempt_disable]       synchronize_rcu_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
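One practical consequence of the consolidated columns above: code that previously needed rcu_barrier_bh() or rcu_barrier_sched() before unloading now needs only rcu_barrier(). A hedged sketch of a module exit path, reusing the hypothetical gbl_foo/foo_lock/foo_reclaim() names from the earlier sketch:

    #include <linux/module.h>

    static void __exit foo_exit(void)
    {
            struct foo *old_fp;

            spin_lock(&foo_lock);
            old_fp = rcu_dereference_protected(gbl_foo,
                                               lockdep_is_held(&foo_lock));
            RCU_INIT_POINTER(gbl_foo, NULL);
            spin_unlock(&foo_lock);

            if (old_fp)
                    call_rcu(&old_fp->rcu, foo_reclaim);

            /* Wait for all outstanding call_rcu() callbacks before the
             * callback function's text can go away; one rcu_barrier()
             * now covers readers of every flavor. */
            rcu_barrier();
    }
    module_exit(foo_exit);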
@@ -890,8 +891,8 @@ sched: Critical sections Grace period Barrier
SRCU:   Critical sections       Grace period                    Barrier
-       srcu_read_lock          synchronize_srcu                srcu_barrier
-       srcu_read_unlock        call_srcu
+       srcu_read_lock          call_srcu                       srcu_barrier
+       srcu_read_unlock        synchronize_srcu
        srcu_dereference        synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held
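SRCU, by contrast, keeps its own per-domain API as listed above. A minimal, self-contained sketch (struct foo, foo_srcu, srcu_foo, and srcu_foo_lock are made-up names for illustration, not part of the documented API):

    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/srcu.h>

    struct foo {
            int a;
    };

    DEFINE_STATIC_SRCU(foo_srcu);
    static DEFINE_SPINLOCK(srcu_foo_lock);
    static struct foo __rcu *srcu_foo;

    static int foo_get_a_srcu(void)
    {
            int idx, val;

            idx = srcu_read_lock(&foo_srcu);        /* SRCU readers may sleep */
            val = srcu_dereference(srcu_foo, &foo_srcu)->a;
            srcu_read_unlock(&foo_srcu, idx);
            return val;
    }

    static void foo_update_srcu(struct foo *new_fp)
    {
            struct foo *old_fp;

            spin_lock(&srcu_foo_lock);
            old_fp = rcu_dereference_protected(srcu_foo,
                                               lockdep_is_held(&srcu_foo_lock));
            rcu_assign_pointer(srcu_foo, new_fp);
            spin_unlock(&srcu_foo_lock);

            synchronize_srcu(&foo_srcu);            /* per-domain grace period */
            kfree(old_fp);
    }

Unlike the consolidated flavors above, each srcu_struct defines its own grace-period domain, which is why the SRCU column keeps its distinct primitives.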
@@ -1034,7 +1035,7 @@ Answer: Just as PREEMPT_RT permits preemption of spinlock
spinlocks blocking while in RCU read-side critical
sections.
- Why the apparent inconsistency? Because it is it
+ Why the apparent inconsistency? Because it is
possible to use priority boosting to keep the RCU
grace periods short if need be (for example, if running
short of memory). In contrast, if blocking waiting