path: root/kernel
author      Sebastian Andrzej Siewior <bigeasy@linutronix.de>   2019-08-16 18:06:26 +0200
committer   Greg Kroah-Hartman <gregkh@linuxfoundation.org>     2019-09-10 10:35:19 +0100
commit      9ad89d579c01cc6449d4fc503f00914caa4bcc7e (patch)
tree        fbc95b253459386f8a7e03ef6fed5b8d194681ca /kernel
parent      086ddc5e71721ce385382e926b8112148c2ad58a (diff)
download    linux-stable-9ad89d579c01cc6449d4fc503f00914caa4bcc7e.tar.gz
linux-stable-9ad89d579c01cc6449d4fc503f00914caa4bcc7e.tar.bz2
linux-stable-9ad89d579c01cc6449d4fc503f00914caa4bcc7e.zip
sched/core: Schedule new worker even if PI-blocked
[ Upstream commit b0fdc01354f45d43f082025636ef808968a27b36 ]

If a task is PI-blocked (blocking on a sleeping spinlock), then we don't want to schedule a new kworker if we schedule out due to lock contention, because !RT does not do that either: a spinning spinlock disables preemption and a worker does not schedule out on lock contention (but spins).

On RT the RW-semaphore implementation uses an rtmutex, so tsk_is_pi_blocked() will return true if a task blocks on it. In this case we would now start a new worker, which may deadlock if one worker is waiting on progress from another worker. Since a RW-semaphore starts a new worker on !RT, we should do the same on RT.

XFS is able to trigger this deadlock.

Allow scheduling a new worker even if the current worker is PI-blocked.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190816160626.12742-1-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
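For context, tsk_is_pi_blocked() reports whether the task is currently blocked on an rtmutex. As a rough sketch (paraphrased from the mainline kernel/sched/sched.h of that era, not copied from this stable branch), the CONFIG_RT_MUTEXES variant of the helper looks like this:

/* Approximate sketch of the helper this patch reorders around. */
static inline int tsk_is_pi_blocked(struct task_struct *tsk)
{
        /* pi_blocked_on is non-NULL while the task waits on an rtmutex. */
        return tsk->pi_blocked_on != NULL;
}

With the old ordering, this check short-circuited sched_submit_work() before the PF_WQ_WORKER handling, so no replacement kworker was woken when a worker blocked on an rtmutex-backed lock.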
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/sched/core.c   |   5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4d5962232a55..42bc2986520d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3469,7 +3469,7 @@ void __noreturn do_task_dead(void)
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
-        if (!tsk->state || tsk_is_pi_blocked(tsk))
+        if (!tsk->state)
                 return;
 
         /*
@@ -3485,6 +3485,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
                 preempt_enable_no_resched();
         }
 
+        if (tsk_is_pi_blocked(tsk))
+                return;
+
         /*
          * If we are going to sleep and we have plugged IO queued,
          * make sure to submit it to avoid deadlocks.
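Putting the two hunks together, sched_submit_work() after this change reads roughly as the sketch below. The workqueue comment and the block-plug tail come from the surrounding mainline context rather than the hunks shown here, so treat it as an illustration of the new ordering, not the exact code of this stable branch:

static inline void sched_submit_work(struct task_struct *tsk)
{
        if (!tsk->state)
                return;

        /*
         * If a worker went to sleep, notify the workqueue so it can wake
         * another task to maintain concurrency.  Preemption is disabled so
         * the possible wakeup of a kworker does not recurse into schedule().
         */
        if (tsk->flags & PF_WQ_WORKER) {
                preempt_disable();
                wq_worker_sleeping(tsk);
                preempt_enable_no_resched();
        }

        /*
         * The PI-blocked check now sits after the workqueue handling, so a
         * replacement worker is scheduled even when this task blocks on an
         * rtmutex-based lock (e.g. an RT RW-semaphore).
         */
        if (tsk_is_pi_blocked(tsk))
                return;

        /*
         * If we are going to sleep and we have plugged IO queued,
         * make sure to submit it to avoid deadlocks.
         */
        if (blk_needs_flush_plug(tsk))
                blk_schedule_flush_plug(tsk);
}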