path: root/kernel/locking
author    Peter Zijlstra <peterz@infradead.org>    2023-09-08 18:22:51 +0200
committer Peter Zijlstra <peterz@infradead.org>    2023-09-20 09:31:12 +0200
commit    6b596e62ed9f90c4a97e68ae1f7b1af5beeb3c05 (patch)
tree      5fdde551fcf4a48f0a1046c34b6d994e0be0ffd6 /kernel/locking
parent    de1474b46d889ee0367f6e71d9adfeb0711e4a8d (diff)
sched: Provide rt_mutex specific scheduler helpers
With PREEMPT_RT there is a rt_mutex recursion problem where
sched_submit_work() can use an rtlock (aka spinlock_t). More
specifically what happens is:

  mutex_lock() /* really rt_mutex */
    ...
      __rt_mutex_slowlock_locked()
        task_blocks_on_rt_mutex()
          // enqueue current task as waiter
          // do PI chain walk
        rt_mutex_slowlock_block()
          schedule()
            sched_submit_work()
              ...
              spin_lock() /* really rtlock */
                ...
                  __rt_mutex_slowlock_locked()
                    task_blocks_on_rt_mutex()
                      // enqueue current task as waiter *AGAIN*
                      // *CONFUSION*

Fix this by making rt_mutex do the sched_submit_work() early, before
it enqueues itself as a waiter -- before it even knows *if* it will
wait.

[[ basically Thomas' patch but with different naming and a few asserts
   added ]]

Originally-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230908162254.999499-5-bigeasy@linutronix.de
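For reference, the shape of the fix looks roughly like the sketch
below. This is an illustration only, not a quote of the patch: the
helper names (rt_mutex_pre_schedule()/rt_mutex_post_schedule()), their
bodies, and the slowpath signature are assumptions made for the
example.

    /* Sketch only -- names, bodies and signatures are assumed. */

    /*
     * Flush pending I/O and worker state *before* the slowpath can
     * enqueue current as an rt_mutex waiter. Any rtlock taken by
     * sched_submit_work() nests like an ordinary lock here, because
     * we are not yet a waiter on anything.
     */
    static void rt_mutex_pre_schedule(void)
    {
            sched_submit_work(current);
    }

    /* Balance the early submit once the lock has been acquired. */
    static void rt_mutex_post_schedule(void)
    {
            sched_update_worker(current);
    }

    static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
                                         unsigned int state)
    {
            int ret;

            rt_mutex_pre_schedule();        /* before becoming a waiter */
            ret = __rt_mutex_slowlock_locked(lock, NULL, state);
            rt_mutex_post_schedule();

            return ret;
    }

With the submit work done up front, the schedule() inside
rt_mutex_slowlock_block() no longer needs to run sched_submit_work()
itself, so the recursive re-entry into task_blocks_on_rt_mutex()
cannot happen.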
Diffstat (limited to 'kernel/locking')
0 files changed, 0 insertions, 0 deletions