author     Jens Axboe <axboe@kernel.dk>   2022-07-21 09:06:47 -0600
committer  Jens Axboe <axboe@kernel.dk>   2022-07-24 18:39:18 -0600
commit     f6b543fd03d347e8bf245cee4f2d54eb6ffd8fcb (patch)
tree       5159aab78b234d41965a12ac991f8a020339cb0e /io_uring/io_uring.h
parent     4f6a94d337408f23e199eee7d35ed6edc201df0c (diff)
io_uring: ensure REQ_F_ISREG is set for async offload
If we're offloading requests directly to io-wq because IOSQE_ASYNC was
set in the sqe, we can miss hashing writes appropriately because we
haven't set REQ_F_ISREG yet. This can cause a performance regression
with buffered writes, as io-wq then no longer correctly serializes
writes to that file.

Ensure that we set the flags in io_prep_async_work(), which will cause
the io-wq work item to be hashed appropriately.

Fixes: 584b0180f0f4 ("io_uring: move read/write file prep state into actual opcode handler")
Link: https://lore.kernel.org/io-uring/20220608080054.GB22428@xsang-OptiPlex-9020/
Reported-by: kernel test robot <oliver.sang@intel.com>
Tested-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
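For context: io-wq only hashes a work item when REQ_F_ISREG is set on the
request, and hashed items that collide on the same key run sequentially.
Without the flag, buffered writes to the same regular file can execute
concurrently instead of being serialized. A sketch of that decision in
io_prep_async_work(), approximating the io_uring code of this period (the
def/ctx locals are assumed from the surrounding function, not shown here):

	/* sketch: hashing decision in io_prep_async_work() */
	if (req->flags & REQ_F_ISREG) {
		/* regular file: hash by inode so buffered writes serialize */
		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
			io_wq_hash_work(&req->work, file_inode(req->file));
	} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
		/* non-regular, non-block files may run on unbound workers */
		if (def->unbound_nonreg_file)
			req->work.flags |= IO_WQ_WORK_UNBOUND;
	}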
Diffstat (limited to 'io_uring/io_uring.h')
-rw-r--r--  io_uring/io_uring.h  |  5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 868f45d55543..5db0a60dc04e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -41,6 +41,11 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd);
 struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
 			       unsigned issue_flags);
 
+static inline bool io_req_ffs_set(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_FIXED_FILE;
+}
+
 bool io_is_uring_fops(struct file *file);
 bool io_alloc_async_data(struct io_kiocb *req);
 void io_req_task_work_add(struct io_kiocb *req);
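This view is limited to io_uring/io_uring.h, so it only shows the helper
being exposed; the io_uring.c side of the commit is not included. Based on
the commit message, the helper is consulted in io_prep_async_work() roughly
as below (a sketch: io_file_get_flags() and REQ_F_SUPPORT_NOWAIT_BIT exist
in this tree, but the exact hunk is assumed rather than quoted):

	/* sketch: derive per-file flags (REQ_F_ISREG, REQ_F_SUPPORT_NOWAIT)
	 * before the work item is hashed. Fixed files already carry these
	 * bits from the file table, hence the io_req_ffs_set() check.
	 */
	if (req->file && !io_req_ffs_set(req))
		req->flags |= io_file_get_flags(req->file) << REQ_F_SUPPORT_NOWAIT_BIT;

Doing this at offload time, rather than only in the read/write prep handler,
is what guarantees the hashing block above sees REQ_F_ISREG even when
IOSQE_ASYNC sends the request straight to io-wq.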