author     Hou Tao <houtao1@huawei.com>  2019-05-21 15:59:03 +0800
committer  Jens Axboe <axboe@kernel.dk>  2019-09-15 16:02:08 -0600
commit     3d24430694077313c75c6b89f618db09943621e4 (patch)
tree       f8688e775734e33e5616d916287e21f30147eb44  /block/blk-throttle.c
parent     89f3b6d62f2c7c1ed7b2e672be605016d9ff60f2 (diff)
block: make rq sector size accessible for block stats
Currently rq->data_len is decreased by partial completion or zeroed by completion, so by the time blk_stat_add() is invoked, data_len is zero and there are never any samples in poll_cb, because blk_mq_poll_stats_bkt() returns -1 when data_len is zero.

We could move blk_stat_add() back to __blk_mq_complete_request(), but that would defeat the effort of calling ktime_get_ns() only once. Instead, reuse the throtl_size field for both block stats and block throttling, and adjust the logic in blk_mq_poll_stats_bkt() accordingly.

Fixes: 4bc6339a583c ("block: move blk_stat_add() to __blk_mq_end_request()")
Tested-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
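For context, here is a minimal, self-contained sketch (not the kernel's exact code) of the size-bucketing behaviour the message describes: poll stats are grouped by direction and by ilog2 of the request size, and a size of zero maps to bucket -1, i.e. the sample is dropped. The names poll_stats_bkt(), ilog2_u32() and NR_POLL_BUCKETS are illustrative assumptions.

```c
#include <stdio.h>

#define NR_POLL_BUCKETS 16

/* Integer log2 for positive values, mirroring what the kernel's ilog2() does. */
static int ilog2_u32(unsigned int v)
{
	int log = -1;

	while (v) {
		v >>= 1;
		log++;
	}
	return log;
}

/*
 * dir: 0 for read, 1 for write; sectors: request size in 512-byte sectors.
 * Returns the stats bucket, or -1 when no sample should be recorded.
 */
static int poll_stats_bkt(int dir, unsigned int sectors)
{
	int bucket;

	if (!sectors)		/* zeroed size: the sample is silently dropped */
		return -1;

	bucket = dir + 2 * ilog2_u32(sectors);
	if (bucket < 0)
		return -1;
	if (bucket >= NR_POLL_BUCKETS)
		return NR_POLL_BUCKETS - 1;
	return bucket;
}

int main(void)
{
	/*
	 * Before the fix, the completed request's data_len was already zero
	 * here, so every completion landed in bucket -1 and poll_cb never
	 * received a sample.  Using a size captured when the request was
	 * started keeps the real size available at completion time.
	 */
	printf("zeroed data_len -> bucket %d\n", poll_stats_bkt(0, 0));
	printf("8-sector read   -> bucket %d\n", poll_stats_bkt(0, 8));
	return 0;
}
```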
Diffstat (limited to 'block/blk-throttle.c')
-rw-r--r--  block/blk-throttle.c  |  3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 0445c998c377..18f773e52dfb 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2248,7 +2248,8 @@ void blk_throtl_stat_add(struct request *rq, u64 time_ns)
 	struct request_queue *q = rq->q;
 	struct throtl_data *td = q->td;
 
-	throtl_track_latency(td, rq->throtl_size, req_op(rq), time_ns >> 10);
+	throtl_track_latency(td, blk_rq_stats_sectors(rq), req_op(rq),
+			     time_ns >> 10);
 }
 
 void blk_throtl_bio_endio(struct bio *bio)
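The new call site relies on blk_rq_stats_sectors() returning a size snapshot taken when the request was started, rather than the live data_len. Below is a toy model, not kernel code, of why such a snapshot survives completion; every name except blk_rq_stats_sectors() (which the diff introduces at the call site) is an illustrative assumption.

```c
#include <stdio.h>

struct toy_request {
	unsigned int data_len;	    /* decremented/zeroed as the request completes */
	unsigned int stats_sectors; /* snapshot recorded when the I/O was started  */
};

/* Accessor in the spirit of blk_rq_stats_sectors(): reads the snapshot only. */
static unsigned int toy_rq_stats_sectors(const struct toy_request *rq)
{
	return rq->stats_sectors;
}

int main(void)
{
	struct toy_request rq = { .data_len = 8 << 9, .stats_sectors = 8 };

	rq.data_len = 0;	/* what completion does to the live length */
	printf("live bytes: %u, snapshot sectors: %u\n",
	       rq.data_len, toy_rq_stats_sectors(&rq));
	return 0;
}
```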