author | Paolo Valente <paolo.valente@linaro.org> | 2019-03-12 09:59:27 +0100 |
---|---|---|
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2019-06-15 11:54:58 +0200 |
commit | f88e587c64b73c8e6ace2281b74ae4499bcb502b (patch) | |
tree | 5552bd113c25bad280fcbd6ba577d4bf3958894a | |
parent | 179e70d0260f71b19d4a06d726e7d716fc562875 (diff) | |
block, bfq: increase idling for weight-raised queues
[ Upstream commit 778c02a236a8728bb992de10ed1f12c0be5b7b0e ]
If a sync bfq_queue has a higher weight than some other queue, and
remains temporarily empty while in service, then, to preserve the
bandwidth share of the queue, it is necessary to plug I/O dispatching
until a new request arrives for the queue. In addition, a timeout
needs to be set, to avoid waiting forever if the process associated
with the queue has actually finished its I/O.
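A minimal user-space sketch of this plugging decision follows; `struct toy_queue`, `plug_window_ms` and the simplified two-flag state are illustrative assumptions, not BFQ's actual data structures.

```c
/*
 * Toy model of the plugging decision described above -- not kernel code.
 * struct toy_queue and plug_window_ms() are illustrative names only.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_queue {
	bool sync;          /* queue issues synchronous I/O */
	bool has_requests;  /* queue currently has pending requests */
};

/*
 * Return how long (in ms) dispatching should stay plugged for this queue;
 * 0 means "do not idle, move on to another queue".
 */
static unsigned int plug_window_ms(const struct toy_queue *q,
				   unsigned int slice_idle_ms)
{
	if (!q->sync || q->has_requests)
		return 0;            /* nothing to wait for */
	return slice_idle_ms;        /* wait for new I/O, but only this long */
}

int main(void)
{
	struct toy_queue q = { .sync = true, .has_requests = false };

	/* With the 8 ms default mentioned below, dispatch is plugged 8 ms. */
	printf("plug for %u ms\n", plug_window_ms(&q, 8));
	return 0;
}
```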
Even with the above timeout, however, the device is not fed with new
I/O for a while if the process has finished its I/O. If this happens
often, then throughput drops and latencies grow. For this reason, the
timeout is kept rather low: 8 ms is the current default.
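The 8 ms default corresponds to BFQ's `bfq_slice_idle` constant in block/bfq-iosched.c (defined, if memory serves, as `8 * NSEC_PER_MSEC`); the standalone snippet below only restates that value to make the unit conversion explicit.

```c
/* Standalone restatement of the default idling window; NSEC_PER_MSEC is
 * redefined here so the snippet compiles outside the kernel tree. */
#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

static const unsigned long long bfq_slice_idle = 8 * NSEC_PER_MSEC;

int main(void)
{
	printf("default idle window: %llu ns = %llu ms\n",
	       bfq_slice_idle, bfq_slice_idle / NSEC_PER_MSEC);
	return 0;
}
```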
Unfortunately, such a low value may cause, on the opposite end, a
violation of bandwidth guarantees for a process that happens to issue
new I/O too late. The higher the system load, the higher the
probability that this happens to some process. This is a problem in
scenarios where service guarantees matter more than throughput. One
important case is that of weight-raised queues, which need to be
granted a very high fraction of the bandwidth.
To address this issue, this commit lower-bounds the plugging timeout
for weight-raised queues to 20 ms. This simple change provides
relevant benefits. For example, on a PLEXTOR PX-256M5S, with which
gnome-terminal starts in 0.6 seconds if there is no other I/O in
progress, the same application starts in:
- 0.8 seconds, instead of 1.2 seconds, if ten files are being read
sequentially in parallel
- 1 second, instead of 2 seconds, if, in parallel, five files are
being read sequentially, and five more files are being written
sequentially
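The two-line hunk at the bottom of this page is the entire change; the sketch below only restates its effect in standalone C. The helper `effective_idle_ms` and the 8 ms base argument are assumptions for illustration; `wr_coeff > 1` marking a weight-raised queue is taken from the diff.

```c
/*
 * Simplified restatement of the patched logic -- not the kernel function.
 * effective_idle_ms() and the 8 ms base value are illustrative only.
 */
#include <stdio.h>

static unsigned int effective_idle_ms(unsigned int base_ms,
				      unsigned int wr_coeff)
{
	/* Weight-raised queues (wr_coeff > 1) get at least a 20 ms window. */
	if (wr_coeff > 1 && base_ms < 20)
		return 20;
	return base_ms;
}

int main(void)
{
	printf("plain queue:         %u ms\n", effective_idle_ms(8, 1));  /* 8 ms  */
	printf("weight-raised queue: %u ms\n", effective_idle_ms(8, 30)); /* 20 ms */
	return 0;
}
```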
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-rw-r--r-- | block/bfq-iosched.c | 2 |
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 3b44bd28fc45..7d45ac451745 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2282,6 +2282,8 @@ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
 	if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 &&
 	    bfq_symmetric_scenario(bfqd))
 		sl = min_t(u64, sl, BFQ_MIN_TT);
+	else if (bfqq->wr_coeff > 1)
+		sl = max_t(u32, sl, 20ULL * NSEC_PER_MSEC);
 
 	bfqd->last_idling_start = ktime_get();
 	hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl),