author    | Kevin Hao <haokexin@gmail.com>                  | 2019-11-05 21:16:57 -0800
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2019-11-12 19:20:37 +0100
commit    | 8e358a02761106abcfaac5eb8c59e44ba923e8ca (patch)
tree      | bc119a39d33946981bbe0522d544051c6a59eb19 /lib
parent    | 6c944fc51f0a798071967288144c9f9859864259 (diff)
dump_stack: avoid the livelock of the dump_lock
commit 5cbf2fff3bba8d3c6a4d47c1754de1cf57e2b01f upstream.
In the current code, we use atomic_cmpxchg() to serialize the output of
dump_stack(), but this implementation suffers from the thundering herd
problem. We have observed this kind of livelock on a Marvell cn96xx
board (24 CPUs) when heavily using dump_stack() in a kprobe handler.

Instead, we can let the competitors wait for the lock to be released
before jumping back to atomic_cmpxchg(). This mitigates the thundering
herd problem. Thanks to Linus for the suggestion.
[akpm@linux-foundation.org: fix comment]
Link: http://lkml.kernel.org/r/20191030031637.6025-1-haokexin@gmail.com
Fixes: b58d977432c8 ("dump_stack: serialize the output from dump_stack()")
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
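
To illustrate the change outside the kernel, here is a minimal, hypothetical userspace sketch of the two spinning strategies, written with C11 atomics rather than the kernel's atomic_t API. The names dump_lock, acquire_old(), acquire_new() and cpu_relax_stub() are made up for this example, and the nested-dump (same-CPU) case handled by dump_stack() is left out.

/*
 * Standalone sketch of the two spinning strategies, using C11 atomics
 * instead of the kernel's atomic_t API. All names here are illustrative
 * only; the same-CPU reentrancy handling of dump_stack() is omitted.
 */
#include <stdatomic.h>

static atomic_int dump_lock = ATOMIC_VAR_INIT(-1);	/* -1 means "free" */

static inline void cpu_relax_stub(void)
{
	/* Stand-in for the kernel's cpu_relax(); a compiler barrier is
	 * enough for this illustration. */
	__asm__ __volatile__("" ::: "memory");
}

/* Old scheme: every waiter immediately retries the cmpxchg, so all
 * waiting CPUs keep demanding exclusive ownership of the cache line
 * that holds dump_lock. */
static void acquire_old(int cpu)
{
	int expected;

	for (;;) {
		expected = -1;
		if (atomic_compare_exchange_strong(&dump_lock, &expected, cpu))
			return;
		cpu_relax_stub();
	}
}

/* New scheme: after a failed cmpxchg, spin on a plain read until the
 * lock looks free, and only then retry. The read-only spin keeps the
 * cache line shared, which is what tames the thundering herd. */
static void acquire_new(int cpu)
{
	int expected;

	for (;;) {
		expected = -1;
		if (atomic_compare_exchange_strong(&dump_lock, &expected, cpu))
			return;
		do {
			cpu_relax_stub();
		} while (atomic_load(&dump_lock) != -1);
	}
}

static void release(void)
{
	atomic_store(&dump_lock, -1);
}

acquire_old() models the pre-patch loop; acquire_new() models the patched loop, which only reissues the cmpxchg after observing the lock return to -1.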
Diffstat (limited to 'lib')
-rw-r--r-- | lib/dump_stack.c | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index 5cff72f18c4a..33ffbf308853 100644
--- a/lib/dump_stack.c
+++ b/lib/dump_stack.c
@@ -106,7 +106,12 @@ retry:
 		was_locked = 1;
 	} else {
 		local_irq_restore(flags);
-		cpu_relax();
+		/*
+		 * Wait for the lock to release before jumping to
+		 * atomic_cmpxchg() in order to mitigate the thundering herd
+		 * problem.
+		 */
+		do { cpu_relax(); } while (atomic_read(&dump_lock) != -1);
 		goto retry;
 	}
 
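
A note on the design choice, as I read the patch: atomic_read() is a plain load, so the waiting CPUs can spin on a shared copy of the dump_lock cache line, whereas a tight atomic_cmpxchg() loop forces each waiter to repeatedly take the line exclusively; with enough CPUs (24 on the cn96xx board above) that bouncing can starve the holder and produce the livelock described in the changelog. After the owner writes -1 back, each waiter issues at most one further cmpxchg per release instead of hammering it continuously.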