Diffstat (limited to 'Documentation/RCU/checklist.txt')
-rw-r--r--	Documentation/RCU/checklist.txt | 49
1 file changed, 11 insertions, 38 deletions
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 49747717d905..6f469864d9f5 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -63,7 +63,7 @@ over a rather long period of time, but improvements are always welcome!
 	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
 	rcu_read_lock_sched(), or by the appropriate update-side lock.
 	Disabling of preemption can serve as rcu_read_lock_sched(), but
-	is less readable.
+	is less readable and prevents lockdep from detecting locking issues.
 
 	Letting RCU-protected pointers "leak" out of an RCU read-side
 	critical section is every bid as bad as letting them leak out
@@ -285,11 +285,7 @@ over a rather long period of time, but improvements are always welcome!
 		here is that superuser already has lots of ways to crash
 		the machine.
 
-	d.	Use call_rcu_bh() rather than call_rcu(), in order to take
-		advantage of call_rcu_bh()'s faster grace periods.  (This
-		is only a partial solution, though.)
-
-	e.	Periodically invoke synchronize_rcu(), permitting a limited
+	d.	Periodically invoke synchronize_rcu(), permitting a limited
 		number of updates per grace period.
 
 	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
@@ -324,37 +320,14 @@ over a rather long period of time, but improvements are always welcome!
 	will break Alpha, cause aggressive compilers to generate bad code,
 	and confuse people trying to read your code.
 
-11.	Note that synchronize_rcu() -only- guarantees to wait until
-	all currently executing rcu_read_lock()-protected RCU read-side
-	critical sections complete.  It does -not- necessarily guarantee
-	that all currently running interrupts, NMIs, preempt_disable()
-	code, or idle loops will complete.  Therefore, if your
-	read-side critical sections are protected by something other
-	than rcu_read_lock(), do -not- use synchronize_rcu().
-
-	Similarly, disabling preemption is not an acceptable substitute
-	for rcu_read_lock().  Code that attempts to use preemption
-	disabling where it should be using rcu_read_lock() will break
-	in CONFIG_PREEMPT=y kernel builds.
-
-	If you want to wait for interrupt handlers, NMI handlers, and
-	code under the influence of preempt_disable(), you instead
-	need to use synchronize_irq() or synchronize_sched().
-
-	This same limitation also applies to synchronize_rcu_bh()
-	and synchronize_srcu(), as well as to the asynchronous and
-	expedited forms of the three primitives, namely call_rcu(),
-	call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
-	synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().
-
-12.	Any lock acquired by an RCU callback must be acquired elsewhere
+11.	Any lock acquired by an RCU callback must be acquired elsewhere
 	with softirq disabled, e.g., via spin_lock_irqsave(),
 	spin_lock_bh(), etc.  Failing to disable irq on a given
 	acquisition of that lock will result in deadlock as soon
 	as the RCU softirq handler happens to run your RCU callback
 	while interrupting that acquisition's critical section.
 
-13.	RCU callbacks can be and are executed in parallel.  In many cases,
+12.	RCU callbacks can be and are executed in parallel.  In many cases,
 	the callback code simply wrappers around kfree(), so that this
 	is not an issue (or, more accurately, to the extent that it is
 	an issue, the memory-allocator locking handles it).  However,
@@ -370,7 +343,7 @@ over a rather long period of time, but improvements are always welcome!
 	not the case, a self-spawning RCU callback would prevent the
 	victim CPU from ever going offline.)
 
-14.	Unlike other forms of RCU, it -is- permissible to block in an
+13.	Unlike other forms of RCU, it -is- permissible to block in an
 	SRCU read-side critical section (demarked by srcu_read_lock()
 	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
 	Please note that if you don't need to sleep in read-side critical
@@ -414,7 +387,7 @@ over a rather long period of time, but improvements are always welcome!
 	Note that rcu_dereference() and rcu_assign_pointer() relate to
 	SRCU just as they do to other forms of RCU.
 
-15.	The whole point of call_rcu(), synchronize_rcu(), and friends
+14.	The whole point of call_rcu(), synchronize_rcu(), and friends
 	is to wait until all pre-existing readers have finished before
 	carrying out some otherwise-destructive operation.  It is
 	therefore critically important to -first- remove any path
@@ -426,13 +399,13 @@ over a rather long period of time, but improvements are always welcome!
 	is the caller's responsibility to guarantee that any subsequent
 	readers will execute safely.
 
-16.	The various RCU read-side primitives do -not- necessarily contain
+15.	The various RCU read-side primitives do -not- necessarily contain
 	memory barriers.  You should therefore plan for the CPU and the
 	compiler to freely reorder code into and out of RCU read-side
 	critical sections.  It is the responsibility of the RCU update-side
 	primitives to deal with this.
 
-17.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
+16.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
 	__rcu sparse checks to validate your RCU code.  These can help
 	find problems as follows:
@@ -455,7 +428,7 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
 
-18.	If you register a callback using call_rcu(), call_rcu_bh(),
+17.	If you register a callback using call_rcu(), call_rcu_bh(),
 	call_rcu_sched(), or call_srcu(), and pass in a function defined
 	within a loadable module, then it in necessary to wait for all
 	pending callbacks to be invoked after the last invocation
@@ -469,8 +442,8 @@ over a rather long period of time, but improvements are always welcome!
 	You instead need to use one of the barrier functions:
 
 	o	call_rcu() -> rcu_barrier()
-	o	call_rcu_bh() -> rcu_barrier_bh()
-	o	call_rcu_sched() -> rcu_barrier_sched()
+	o	call_rcu_bh() -> rcu_barrier()
+	o	call_rcu_sched() -> rcu_barrier()
 	o	call_srcu() -> srcu_barrier()
 
 	However, these barrier functions are absolutely -not- guaranteed
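
For reference, a minimal sketch (not part of the patch, all names hypothetical) of the rule kept as renumbered item 11: a lock that an RCU callback acquires must be acquired everywhere else with softirqs (or irqs) disabled, because the callback can run from the RCU softirq on the same CPU.

/*
 * Sketch only: struct foo, foo_lock, and foo_pending are made up for
 * illustration; only the locking pattern is the point.
 */
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rcu;
	/* ... payload ... */
};

static DEFINE_SPINLOCK(foo_lock);	/* protects foo_pending */
static unsigned long foo_pending;

static void foo_reclaim(struct rcu_head *head)	/* RCU callback, softirq context */
{
	struct foo *fp = container_of(head, struct foo, rcu);

	spin_lock(&foo_lock);		/* already in softirq, plain lock is enough here */
	foo_pending--;
	spin_unlock(&foo_lock);
	kfree(fp);
}

void foo_retire(struct foo *fp)		/* process context */
{
	/*
	 * spin_lock_bh() (or spin_lock_irqsave()) is required: with a plain
	 * spin_lock(), the RCU softirq could interrupt this critical section
	 * and run foo_reclaim(), which would then spin forever on foo_lock
	 * already held by this CPU.
	 */
	spin_lock_bh(&foo_lock);
	foo_pending++;
	spin_unlock_bh(&foo_lock);

	call_rcu(&fp->rcu, foo_reclaim);
}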
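
Likewise, a minimal sketch (again hypothetical names, not from the patch) of renumbered item 17 and the barrier table above: a module whose callback function lives in module text must wait for all pending callbacks with rcu_barrier() before its exit handler returns, or a callback could run after the module text has been unloaded.

/* Sketch only: illustrates rcu_barrier() at module unload. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rcu;
	/* ... payload ... */
};

static void foo_rcu_free(struct rcu_head *head)	/* defined in module text */
{
	kfree(container_of(head, struct foo, rcu));
}

void foo_queue_free(struct foo *fp)
{
	call_rcu(&fp->rcu, foo_rcu_free);	/* may still be queued at unload time */
}

static int __init foo_init(void)
{
	return 0;
}

static void __exit foo_exit(void)
{
	/* First unregister everything so no new call_rcu() can be issued, */
	/* then wait for every already-queued foo_rcu_free() invocation:   */
	rcu_barrier();
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");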