author      Ming Lei <ming.lei@redhat.com>            2023-09-01 21:49:16 +0800
committer   Jens Axboe <axboe@kernel.dk>              2023-09-01 07:54:06 -0600
commit      b484a40dc1f16edb58e5430105a021e1916e6f27 (patch)
tree        83d03fb7a8cacab3b52f3b707def4a7960ff2894 /io_uring
parent      bd6fc5da4c51107e1e0cec4a3a07963d1dae2c84 (diff)
io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
io_wq_put_and_exit() is called from do_exit(), but all FIXED_FILE requests
in io_wq aren't canceled in io_uring_cancel_generic() called from do_exit().
Meanwhile, the io_wq IO code path may share resources with the normal iopoll
code path.
So if any HIPRI request is submitted via io_wq, that request may not get the
resources it needs to move forward, given that iopoll isn't possible in
io_wq_put_and_exit().
The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
with default null_blk parameters.
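[Note: the following is an illustrative sketch, not part of the commit. It is a
minimal liburing reproducer of the triggering pattern, assuming /dev/nullb0
exists with default null_blk parameters and an unfixed kernel; the device path,
buffer size, and IOSQE_ASYNC punt are assumptions that stand in for what
't/io_uring -n4 /dev/nullb0' sets up internally.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	void *buf;
	int fd;

	fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;

	/* HIPRI completions: reaped only by polling, never by IRQ */
	io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);
	/* register the fd so requests against it are FIXED_FILE */
	io_uring_register_files(&ring, &fd, 1);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, 0, buf, 4096, 0);	/* fd index 0 */
	sqe->flags |= IOSQE_FIXED_FILE | IOSQE_ASYNC;	/* punt to io_wq */
	io_uring_submit(&ring);

	/*
	 * Exit without reaping: on an unfixed kernel, do_exit() ->
	 * io_wq_put_and_exit() can hang, since the FIXED_FILE request
	 * isn't canceled and nobody iopolls its completion.
	 */
	return 0;
}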
Fix it by always canceling all requests in io_wq with a new helper,
io_uring_cancel_wq(). This is reasonable because destroying the io_wq
follows immediately after canceling its requests.
Closes: https://lore.kernel.org/linux-block/3893581.1691785261@warthog.procyon.org.uk/
Reported-by: David Howells <dhowells@redhat.com>
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20230901134916.2415386-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring')
-rw-r--r--   io_uring/io_uring.c   32
1 file changed, 32 insertions, 0 deletions
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e7675355048d..c6d9e4677073 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3290,6 +3290,37 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
 	return percpu_counter_sum(&tctx->inflight);
 }
 
+static void io_uring_cancel_wq(struct io_uring_task *tctx)
+{
+	int ret;
+
+	if (!tctx->io_wq)
+		return;
+
+	/*
+	 * FIXED_FILE request isn't tracked in do_exit(), and these
+	 * requests may be submitted to our io_wq as iopoll, so have to
+	 * cancel them before destroying io_wq for avoiding IO hang
+	 */
+	do {
+		struct io_tctx_node *node;
+		unsigned long index;
+
+		ret = 0;
+		xa_for_each(&tctx->xa, index, node) {
+			struct io_ring_ctx *ctx = node->ctx;
+			struct io_task_cancel cancel = { .task = current, .all = true, };
+			enum io_wq_cancel cret;
+
+			io_iopoll_try_reap_events(ctx);
+			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
+					       &cancel, true);
+			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+			cond_resched();
+		}
+	} while (ret);
+}
+
 /*
  * Find any io_uring ctx that this task has registered or done IO on, and cancel
  * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
@@ -3361,6 +3392,7 @@ end_wait:
 		finish_wait(&tctx->wait, &wait);
 	} while (1);
 
+	io_uring_cancel_wq(tctx);
 	io_uring_clean_tctx(tctx);
 	if (cancel_all) {
 		/*
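[Note: context, not part of the patch. The do/while loop in io_uring_cancel_wq()
keys off the return value of io_wq_cancel_cb(); the enum below is paraphrased
from io_uring/io-wq.h.]

enum io_wq_cancel {
	IO_WQ_CANCEL_OK,	/* found, cancelled before it started */
	IO_WQ_CANCEL_RUNNING,	/* found, but already executing */
	IO_WQ_CANCEL_NOTFOUND,	/* no matching work item */
};

Because the loop accumulates ret |= (cret != IO_WQ_CANCEL_NOTFOUND), it keeps
reaping iopoll events and re-issuing cancellations until every ctx reports
NOTFOUND, so no io_wq work can remain by the time io_uring_clean_tctx()
destroys the io_wq.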