| author | Jens Axboe <axboe@kernel.dk> | 2024-03-18 16:13:01 -0600 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2024-04-15 08:10:25 -0600 |
| commit | a9165b83c1937eeed1f0c731468216d6371d647f (patch) | |
| tree | b7444f11aca3a27908c5b1660344efe44cbc190e /io_uring/io_uring.c | |
| parent | d80f940701302e84d1398ecb103083468b566a69 (diff) | |
io_uring/rw: always setup io_async_rw for read/write requests
Read/write requests try to put everything on the stack, and then allocate
and copy the state if a retry is needed. This necessitates a bunch of
nasty code that deals with intermediate state.
Get rid of this, and have the prep side set up everything that is needed
upfront, which greatly simplifies the opcode handlers.
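
For a sense of what this buys, here is a minimal userspace model (not kernel code; `request`, `rw_state`, `prep_rw()`, and `issue_rw()` are invented stand-ins for io_kiocb, io_async_rw, and the rw prep/issue handlers): once prep guarantees the async state exists, the issue and retry paths share one piece of state and need no allocate-and-copy fallback.

```c
#include <stdio.h>
#include <stdlib.h>

struct rw_state {
	long bytes_done;	/* progress carried across retries */
};

struct request {
	void *async_data;	/* always valid after prep, as in the new scheme */
};

/* Prep side: allocate the async state up front, exactly once. */
static int prep_rw(struct request *req)
{
	req->async_data = calloc(1, sizeof(struct rw_state));
	return req->async_data ? 0 : -1;
}

/* Issue side: no "do we have state yet?" branches, no copy on retry. */
static int issue_rw(struct request *req)
{
	struct rw_state *rw = req->async_data;

	rw->bytes_done += 4096;		/* pretend one 4K chunk completed */
	return rw->bytes_done < 8192;	/* nonzero means "retry" */
}

int main(void)
{
	struct request req;

	if (prep_rw(&req))
		return 1;
	while (issue_rw(&req))
		;	/* the retry path reuses the same state, no copying */
	printf("done: %ld bytes\n",
	       ((struct rw_state *)req.async_data)->bytes_done);
	free(req.async_data);
	return 0;
}
```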
As part of this, add an alloc cache for io_async_rw, to keep the
now-unconditional allocation cheap.
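
The alloc cache itself is conceptually a small, capped free list: get pops a previously freed object when one is available, and put stashes an object for reuse instead of freeing it, until the cap is hit. A minimal sketch of that idea in plain C (`cache_*` names and `CACHE_MAX` are illustrative; the kernel's real implementation is io_alloc_cache, capped at IO_ALLOC_CACHE_MAX):

```c
#include <stdlib.h>

#define CACHE_MAX 128	/* illustrative cap, like IO_ALLOC_CACHE_MAX */

struct alloc_cache {
	void  *entries[CACHE_MAX];
	size_t nr;		/* objects currently cached */
	size_t elem_size;	/* size of each object */
};

static void cache_init(struct alloc_cache *c, size_t elem_size)
{
	c->nr = 0;
	c->elem_size = elem_size;
}

/* Reuse a previously freed object if possible, else fall back to malloc. */
static void *cache_get(struct alloc_cache *c)
{
	if (c->nr)
		return c->entries[--c->nr];
	return malloc(c->elem_size);
}

/* Stash for reuse; only free once the cache is at capacity. */
static void cache_put(struct alloc_cache *c, void *obj)
{
	if (c->nr < CACHE_MAX)
		c->entries[c->nr++] = obj;
	else
		free(obj);
}

static void cache_destroy(struct alloc_cache *c)
{
	while (c->nr)
		free(c->entries[--c->nr]);
}

int main(void)
{
	struct alloc_cache c;
	void *a, *b;

	cache_init(&c, 64);
	a = cache_get(&c);	/* empty cache: falls back to malloc */
	cache_put(&c, a);	/* stashed, not freed */
	b = cache_get(&c);	/* recycled: b == a, no allocator round-trip */
	cache_put(&c, b);
	cache_destroy(&c);
	return 0;
}
```

In steady state a get/put pair is just a pointer pop and push, which is what lets the now-unconditional allocation stay cheap.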
In terms of cost, this should be basically free and transparent. Even for
the worst case of {READ,WRITE}_FIXED, which didn't need the async data
before, performance is unaffected in the usual peak workload used to test
it: it still runs at 122M IOPS.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/io_uring.c')
| -rw-r--r-- | io_uring/io_uring.c | 3 |

1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e6e7794d497f..b0554d122931 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -311,6 +311,8 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 			    sizeof(struct async_poll));
 	io_alloc_cache_init(&ctx->netmsg_cache, IO_ALLOC_CACHE_MAX,
 			    sizeof(struct io_async_msghdr));
+	io_alloc_cache_init(&ctx->rw_cache, IO_ALLOC_CACHE_MAX,
+			    sizeof(struct io_async_rw));
 	io_futex_cache_init(ctx);
 	init_completion(&ctx->ref_comp);
 	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
@@ -2823,6 +2825,7 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	io_eventfd_unregister(ctx);
 	io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free);
 	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
+	io_alloc_cache_free(&ctx->rw_cache, io_rw_cache_free);
 	io_futex_cache_free(ctx);
 	io_destroy_buffers(ctx);
 	mutex_unlock(&ctx->uring_lock);
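
The diffstat above is limited to io_uring/io_uring.c, so only the cache's init and teardown hooks are visible here; the get/put side sits in io_uring/rw.c's prep path. A rough sketch of that shape, with the caveats noted in the comment (`rw_state_get()`/`rw_state_put()` are invented names, and the real io_alloc_cache_get()/io_alloc_cache_put() of this era traffic in struct io_cache_entry rather than raw object pointers):

```c
/*
 * Simplified sketch of the consumer side in io_uring/rw.c, not the
 * kernel source: helper names are made up, error paths are elided,
 * and the io_cache_entry wrapping used by the real cache helpers
 * is glossed over in favor of raw pointers.
 */
static struct io_async_rw *rw_state_get(struct io_ring_ctx *ctx)
{
	struct io_async_rw *rw;

	rw = io_alloc_cache_get(&ctx->rw_cache);	/* recycle a freed one */
	if (!rw)
		rw = kmalloc(sizeof(*rw), GFP_KERNEL);	/* or allocate fresh */
	return rw;
}

static void rw_state_put(struct io_ring_ctx *ctx, struct io_async_rw *rw)
{
	/*
	 * Cache for reuse when there is room; io_rw_cache_free() (seen in
	 * the diff above) reaps whatever is still cached at ring teardown.
	 */
	if (!io_alloc_cache_put(&ctx->rw_cache, rw))
		kfree(rw);
}
```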