author     Jens Axboe <axboe@kernel.dk>    2021-08-27 11:33:19 -0600
committer  Jens Axboe <axboe@kernel.dk>    2021-08-29 07:55:55 -0600
commit     2e480058ddc21ec53a10e8b41623e245e908bdbc (patch)
tree       cdbd45542c899a929d2593c913264a34c87b0ce7 /fs/io-wq.c
parent     90499ad00ca59320b5bb43392b7931e1bd84cad2 (diff)
io-wq: provide a way to limit max number of workers
io-wq divides work into two categories:
1) Work that completes in a bounded time, like reading from a regular file
or a block device. This type of work is limited based on the size of
the SQ ring.
2) Work that may never complete; we call this unbounded work. The number
   of workers here is limited only by RLIMIT_NPROC (a sketch of how work
   is routed into these two buckets follows below).
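To make the split concrete, here is a paraphrased sketch (simplified, not
a verbatim copy of the kernel source) of how io-wq routes a queued work
item into one of the two per-node accounting buckets; the names follow
the io-wq internals of this era:

/* Work flagged IO_WQ_WORK_UNBOUND is accounted against the unbounded
 * bucket, everything else against the bounded one. */
static inline struct io_wqe_acct *io_work_get_acct(struct io_wqe *wqe,
						   struct io_wq_work *work)
{
	bool unbound = work->flags & IO_WQ_WORK_UNBOUND;

	return &wqe->acct[unbound ? IO_WQ_ACCT_UNBOUND : IO_WQ_ACCT_BOUND];
}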
For various use cases, it's handy to have the kernel limit the maximum
number of pending workers for both categories. Provide a way to do this
with a new IORING_REGISTER_IOWQ_MAX_WORKERS operation.
IORING_REGISTER_IOWQ_MAX_WORKERS takes an array of two integers and sets
the max worker count for each category to the value passed in. The old
values are returned in that same array. If 0 is passed in for either
category, that category's current value is simply returned unchanged.
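For illustration, here is a minimal userspace sketch driving the new
opcode through the raw io_uring_setup(2)/io_uring_register(2) syscalls.
It assumes uapi headers new enough to define
IORING_REGISTER_IOWQ_MAX_WORKERS; error handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

int main(void)
{
	struct io_uring_params p;
	/* 0 in a slot means "report the current value, change nothing" */
	unsigned int counts[2] = { 0, 0 };
	int ring_fd;

	memset(&p, 0, sizeof(p));
	ring_fd = syscall(__NR_io_uring_setup, 8, &p);
	if (ring_fd < 0)
		return 1;

	/* counts[0] is the bounded category, counts[1] the unbounded one */
	syscall(__NR_io_uring_register, ring_fd,
		IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2);
	printf("bounded=%u unbounded=%u\n", counts[0], counts[1]);

	/* cap both categories; the old values come back in counts[] */
	counts[0] = 8;
	counts[1] = 4;
	syscall(__NR_io_uring_register, ring_fd,
		IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2);
	return 0;
}

liburing also grew a convenience wrapper for this
(io_uring_register_iowq_max_workers()), which is the easier route in
application code.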
The value is capped at RLIMIT_NPROC. This actually isn't that important,
as it's more of a hint; if we exceed the value, the attempt to fork a new
worker will simply fail. This already happens naturally if more than one
node is in the system, as these values are per-node internally in io-wq.
Reported-by: Johannes Lundberg <johalun0@gmail.com>
Link: https://github.com/axboe/liburing/issues/420
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'fs/io-wq.c')
-rw-r--r--  fs/io-wq.c  29
1 file changed, 29 insertions(+), 0 deletions(-)
diff --git a/fs/io-wq.c b/fs/io-wq.c
index 8da9bb103916..4b5fc621ab39 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -1152,6 +1152,35 @@ int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
 	return 0;
 }
 
+/*
+ * Set max number of unbounded workers, returns old value. If new_count is 0,
+ * then just return the old value.
+ */
+int io_wq_max_workers(struct io_wq *wq, int *new_count)
+{
+	int i, node, prev = 0;
+
+	for (i = 0; i < 2; i++) {
+		if (new_count[i] > task_rlimit(current, RLIMIT_NPROC))
+			new_count[i] = task_rlimit(current, RLIMIT_NPROC);
+	}
+
+	rcu_read_lock();
+	for_each_node(node) {
+		struct io_wqe_acct *acct;
+
+		for (i = 0; i < 2; i++) {
+			acct = &wq->wqes[node]->acct[i];
+			prev = max_t(int, acct->max_workers, prev);
+			if (new_count[i])
+				acct->max_workers = new_count[i];
+			new_count[i] = prev;
+		}
+	}
+	rcu_read_unlock();
+	return 0;
+}
+
 static __init int io_wq_init(void)
 {
 	int ret;