| author | Boqun Feng <boqun.feng@gmail.com> | 2020-08-07 15:42:30 +0800 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2020-08-26 12:42:05 +0200 |
| commit | f08e3888574d490b31481eef6d84c61bedba7a47 (patch) | |
| tree | 525bf0b8fd30c22a9f4dfe5a459d42dd10e7a85e | |
| parent | 68e305678583f13a67e2ce22088c2520bd4f97b4 (diff) | |
lockdep: Fix recursive read lock related safe->unsafe detection
Currently, in safe->unsafe detection, lockdep misses the fact that a
LOCK_ENABLED_IRQ_*_READ usage and a LOCK_USED_IN_IRQ_*_READ usage may
cause deadlock too, for example:
    P1                            P2
    <irq disabled>
    write_lock(l1);               <irq enabled>
                                  read_lock(l2);
    write_lock(l2);
                                  <in irq>
                                  read_lock(l1);
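As a concrete illustration, here is a minimal kernel-style sketch of the scenario above; the lock names l1/l2 and the functions p1(), p2() and p2_irq() are hypothetical and are not taken from the patch:

    #include <linux/spinlock.h>
    #include <linux/irqflags.h>

    static DEFINE_RWLOCK(l1);
    static DEFINE_RWLOCK(l2);

    /* P1: process context, interrupts disabled */
    static void p1(void)
    {
        unsigned long flags;

        local_irq_save(flags);
        write_lock(&l1);
        write_lock(&l2);        /* spins: P2 still holds l2 for read */
        /* ... */
        write_unlock(&l2);
        write_unlock(&l1);
        local_irq_restore(flags);
    }

    /* P2: process context, interrupts enabled */
    static void p2(void)
    {
        read_lock(&l2);
        /* <interrupt arrives here> */
        read_unlock(&l2);       /* never reached */
    }

    /* interrupt handler preempting P2 */
    static void p2_irq(void)
    {
        read_lock(&l1);         /* spins: P1 holds l1 for write */
        /* ... */
        read_unlock(&l1);
    }

P1 spins on write_lock(l2) while holding l1 for write with interrupts off; P2 holds l2 for read but is preempted by the interrupt handler, which spins on read_lock(l1) because a writer (P1) holds l1, so none of the three contexts can make progress.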
Actually, all of the following cases may cause deadlocks:
LOCK_USED_IN_IRQ_* -> LOCK_ENABLED_IRQ_*
LOCK_USED_IN_IRQ_*_READ -> LOCK_ENABLED_IRQ_*
LOCK_USED_IN_IRQ_* -> LOCK_ENABLED_IRQ_*_READ
LOCK_USED_IN_IRQ_*_READ -> LOCK_ENABLED_IRQ_*_READ
To fix this, we need to 1) change the calculation of exclusive_mask() so
that READ bits are not dropped and 2) always call usage() in
mark_lock_irq() to check usage deadlocks, even when the new usage of the
lock is READ.
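For point 1), a self-contained sketch of the idea may help; the bit definitions below are simplified, hypothetical stand-ins rather than the real lockdep usage-bit layout, and the function body only illustrates "READ bits are not dropped":

    #include <stdio.h>

    /* Hypothetical, simplified usage bits -- not the real lockdep layout. */
    #define USED_IN_IRQ       (1UL << 0)
    #define USED_IN_IRQ_READ  (1UL << 1)
    #define ENABLED_IRQ       (1UL << 2)
    #define ENABLED_IRQ_READ  (1UL << 3)

    /*
     * Compute the usages that conflict with @mask.  The *_READ bits are
     * kept rather than stripped, so every one of the four combinations
     * listed above is treated as a potential deadlock.
     */
    static unsigned long exclusive_mask(unsigned long mask)
    {
        unsigned long excl = 0;

        if (mask & (USED_IN_IRQ | USED_IN_IRQ_READ))
            excl |= ENABLED_IRQ | ENABLED_IRQ_READ;
        if (mask & (ENABLED_IRQ | ENABLED_IRQ_READ))
            excl |= USED_IN_IRQ | USED_IN_IRQ_READ;

        return excl;
    }

    int main(void)
    {
        /* USED_IN_IRQ_READ now also conflicts with ENABLED_IRQ_READ. */
        printf("exclusive(USED_IN_IRQ_READ) = %#lx\n",
               exclusive_mask(USED_IN_IRQ_READ));
        return 0;
    }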
Besides, adjust usage_match() and usage_accumulate() to account for the
recursive read lock changes.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200807074238.1632519-12-boqun.feng@gmail.com