author		Wenwen Chen <wenwen.chen@samsung.com>	2023-05-25 16:26:26 +0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2023-06-21 16:02:07 +0200
commit		5bf82a1bc7265004b41b624547e37014d108c5ca (patch)
tree		ad9a23b7a240525ec3877a00df6809cc248ad442 /io_uring
parent		6c4510b2e61376399c2e2637751c896d1e3f6926 (diff)
download	linux-stable-5bf82a1bc7265004b41b624547e37014d108c5ca.tar.gz
		linux-stable-5bf82a1bc7265004b41b624547e37014d108c5ca.tar.bz2
		linux-stable-5bf82a1bc7265004b41b624547e37014d108c5ca.zip
io_uring: unlock sqd->lock before sq thread release CPU
[ Upstream commit 533ab73f5b5c95dcb4152b52d5482abcc824c690 ]
When it is idle, the sq thread voluntarily releases the CPU by calling
the cond_resched() and schedule() interfaces, so that more resources are
available for other threads to run.

The problem is that the sq thread does not unlock sqd->lock before
releasing the CPU, so other threads can be left waiting on sqd->lock for
a long time. For example, io_sq_offload_create(),
io_register_iowq_max_workers() and io_ring_exit_work() all need to take
sqd->lock.

Unlocking sqd->lock before the sq thread releases the CPU lets those
waiters proceed promptly and gives the user a more responsive
experience.
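
The change amounts to the familiar pattern of dropping a mutex around a
voluntary reschedule so that waiters can make progress while the thread
is off the CPU. A minimal sketch of that pattern follows; the loop
structure and the should_stop()/do_sqpoll_work() helpers are simplified
placeholders for illustration, not the real io_sq_thread() body:

	/*
	 * Unlock-before-resched pattern: release sqd->lock before yielding
	 * the CPU so that tasks waiting on the lock are not stalled for the
	 * whole time this thread is scheduled out, then reacquire it.
	 */
	mutex_lock(&sqd->lock);
	while (!should_stop()) {		/* placeholder exit condition */
		do_sqpoll_work();		/* placeholder work step */
		if (unlikely(need_resched())) {
			mutex_unlock(&sqd->lock);	/* let waiters take sqd->lock */
			cond_resched();			/* yield the CPU */
			mutex_lock(&sqd->lock);		/* reacquire before the next pass */
		}
	}
	mutex_unlock(&sqd->lock);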
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Wenwen Chen <wenwen.chen@samsung.com>
Link: https://lore.kernel.org/r/20230525082626.577862-1-wenwen.chen@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'io_uring')
-rw-r--r--	io_uring/sqpoll.c	6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 9db4bc1f521a..5e329e3cd470 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -255,9 +255,13 @@ static int io_sq_thread(void *data)
 			sqt_spin = true;
 
 		if (sqt_spin || !time_after(jiffies, timeout)) {
-			cond_resched();
 			if (sqt_spin)
 				timeout = jiffies + sqd->sq_thread_idle;
+			if (unlikely(need_resched())) {
+				mutex_unlock(&sqd->lock);
+				cond_resched();
+				mutex_lock(&sqd->lock);
+			}
 			continue;
 		}
 