author     Peter Zijlstra <peterz@infradead.org>      2019-10-01 11:18:37 +0200
committer  Ingo Molnar <mingo@kernel.org>             2019-11-13 08:01:30 +0100
commit     ff51ff84d82aea5a889b85f2b9fb3aa2b8691668 (patch)
tree       2f5e8e6ff1c9dd57599318f82cad15298c2841b0 /kernel
parent     0e3f1ad80fc8cb0c517fd9a9afb22752b741fa76 (diff)
sched/core: Avoid spurious lock dependencies
While seemingly harmless, __sched_fork() does hrtimer_init(), which,
when DEBUG_OBJECTS is enabled, can end up doing allocations.
This then results in the following lock order:

  rq->lock
    zone->lock.rlock
      batched_entropy_u64.lock
This in turn causes deadlocks when we do wakeups while holding that
batched_entropy lock -- as the random code does.
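To make the inversion concrete, here is a minimal userspace analogy, not kernel code: two pthread mutexes stand in for rq->lock and batched_entropy_u64.lock, and the two threads mimic the init_idle() allocation path and the random-code wakeup path. The lock and function names are illustrative assumptions, and trylock is used so the program reports the conflicting acquisition orders instead of hanging.

```c
/*
 * Illustrative userspace analogy only -- not kernel code.  The mutex and
 * function names are made up for demonstration; they stand in for
 * rq->lock and batched_entropy_u64.lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rq_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t entropy_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_barrier_t both_hold_first; /* both threads hold their first lock */

/* Mimics the old init_idle() path: rq_lock held, then the DEBUG_OBJECTS
 * allocation chain eventually wants the entropy lock. */
static void *init_idle_like(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rq_lock);
	pthread_barrier_wait(&both_hold_first);
	if (pthread_mutex_trylock(&entropy_lock) != 0)
		printf("init path: wants entropy lock while holding rq lock -> inversion\n");
	else
		pthread_mutex_unlock(&entropy_lock);
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

/* Mimics the random code: entropy lock held, then a wakeup needs rq_lock. */
static void *random_wakeup_like(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&entropy_lock);
	pthread_barrier_wait(&both_hold_first);
	if (pthread_mutex_trylock(&rq_lock) != 0)
		printf("random path: wants rq lock while holding entropy lock -> inversion\n");
	else
		pthread_mutex_unlock(&rq_lock);
	pthread_mutex_unlock(&entropy_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_barrier_init(&both_hold_first, NULL, 2);
	pthread_create(&a, NULL, init_idle_like, NULL);
	pthread_create(&b, NULL, random_wakeup_like, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	pthread_barrier_destroy(&both_hold_first);
	/* With blocking lock() calls instead of trylock(), the two opposite
	 * acquisition orders would deadlock at this point. */
	return 0;
}
```

Compile with `gcc -pthread`; both threads report the cross-order acquisition that, in the kernel case, shows up as the spurious dependency described above.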
Solve this by moving __sched_fork() out from under rq->lock. This is
safe because nothing there relies on rq->lock, as also evident from the
other __sched_fork() callsite.
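For reference, this is roughly what the resulting ordering in init_idle() looks like; a sketch reconstructed from the hunk in the diff below, with the rest of the function elided.

```c
void init_idle(struct task_struct *idle, int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long flags;

	/* May call hrtimer_init() and, with DEBUG_OBJECTS, allocate;
	 * it therefore now runs before rq->lock is taken. */
	__sched_fork(0, idle);

	raw_spin_lock_irqsave(&idle->pi_lock, flags);
	raw_spin_lock(&rq->lock);

	idle->state = TASK_RUNNING;
	idle->se.exec_start = sched_clock();
	idle->flags |= PF_IDLE;
	/* ... remainder of init_idle() unchanged ... */
}
```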
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: bigeasy@linutronix.de
Cc: cl@linux.com
Cc: keescook@chromium.org
Cc: penberg@kernel.org
Cc: rientjes@google.com
Cc: thgarnie@google.com
Cc: tytso@mit.edu
Cc: will@kernel.org
Fixes: b7d5dc21072c ("random: add a spinlock_t to struct batched_entropy")
Link: https://lkml.kernel.org/r/20191001091837.GK4536@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel'):

 -rw-r--r--  kernel/sched/core.c | 3
 1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0f2eb3629070..33cd25051f3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6019,10 +6019,11 @@ void init_idle(struct task_struct *idle, int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
+	__sched_fork(0, idle);
+
 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
 	raw_spin_lock(&rq->lock);
 
-	__sched_fork(0, idle);
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();
 	idle->flags |= PF_IDLE;
```