author		Pavel Begunkov <asml.silence@gmail.com>	2023-09-07 13:50:08 +0100
committer	Jens Axboe <axboe@kernel.dk>			2023-09-07 09:02:29 -0600
commit		27122c079f5b4b4ecf1323b65700edc57e07bf6e
tree		7b25a4d76113e5914538cb67a0d97b912b1d3e34 /io_uring
parent		45500dc4e01c167ee063f3dcc22f51ced5b2b1e9
io_uring: fix unprotected iopoll overflow
[ 71.490669] WARNING: CPU: 3 PID: 17070 at io_uring/io_uring.c:769
io_cqring_event_overflow+0x47b/0x6b0
[ 71.498381] Call Trace:
[ 71.498590] <TASK>
[ 71.501858] io_req_cqe_overflow+0x105/0x1e0
[ 71.502194] __io_submit_flush_completions+0x9f9/0x1090
[ 71.503537] io_submit_sqes+0xebd/0x1f00
[ 71.503879] __do_sys_io_uring_enter+0x8c5/0x2380
[ 71.507360] do_syscall_64+0x39/0x80
We decoupled CQ locking from ->task_complete but didn't fix up the places
that force locking for CQ overflows.
Fixes: ec26c225f06f5 ("io_uring: merge iopoll and normal completion paths")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring')
-rw-r--r--	io_uring/io_uring.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 58d8dd34a45f..090913acf1db 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -908,7 +908,7 @@ static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
 		struct io_uring_cqe *cqe = &ctx->completion_cqes[i];

 		if (!io_fill_cqe_aux(ctx, cqe->user_data, cqe->res,
 				     cqe->flags)) {
-			if (ctx->task_complete) {
+			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
 				io_cqring_event_overflow(ctx, cqe->user_data,
 							cqe->res, cqe->flags, 0, 0);
@@ -1566,7 +1566,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)

 		if (!(req->flags & REQ_F_CQE_SKIP) &&
 		    unlikely(!io_fill_cqe_req(ctx, req))) {
-			if (ctx->task_complete) {
+			if (ctx->lockless_cq) {
 				spin_lock(&ctx->completion_lock);
 				io_req_cqe_overflow(req);
 				spin_unlock(&ctx->completion_lock);