author     Ming Lei <ming.lei@redhat.com>    2017-12-18 20:22:14 +0800
committer  Jens Axboe <axboe@kernel.dk>      2018-01-06 09:18:00 -0700
commit     6a501bf0807b5dc024fe52a4f956800a352c39ab
tree       cc869dfac6aaf6205896fd1e2adf9631f4d75b70 /block/blk-merge.c
parent     92681eca6104d2ec2fdf9fc65f529deb226ab0a1
blk-merge: compute bio->bi_seg_front_size efficiently
It is enough to check and compute bio->bi_seg_front_size just
after the first segment is found, but the current code checks it
for every bvec, which is inefficient.
This patch follows the approach used in __blk_recalc_rq_segments()
for computing bio->bi_seg_front_size; it is more efficient and
makes the code more readable too.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-merge.c')
-rw-r--r--  block/blk-merge.c | 9
1 file changed, 5 insertions, 4 deletions
diff --git a/block/blk-merge.c b/block/blk-merge.c
index f5dedd57dff6..a476337a8ff4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -146,22 +146,21 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			bvprvp = &bvprv;
 			sectors += bv.bv_len >> 9;
 
-			if (nsegs == 1 && seg_size > front_seg_size)
-				front_seg_size = seg_size;
 			continue;
 		}
 new_segment:
 		if (nsegs == queue_max_segments(q))
 			goto split;
 
+		if (nsegs == 1 && seg_size > front_seg_size)
+			front_seg_size = seg_size;
+
 		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
 		seg_size = bv.bv_len;
 		sectors += bv.bv_len >> 9;
 
-		if (nsegs == 1 && seg_size > front_seg_size)
-			front_seg_size = seg_size;
 	}
 
 	do_split = false;
@@ -174,6 +173,8 @@ split:
 		bio = new;
 	}
 
+	if (nsegs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
 	bio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bio->bi_seg_back_size)
 		bio->bi_seg_back_size = seg_size;
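
For illustration, the same hoisting pattern as a standalone, non-kernel C sketch: front_seg_size only ever describes segment 1, so it is updated when a new segment starts and once more after the loop for the still-open segment, instead of on every element. Everything here (struct elem, the mergeable flag, the sample lengths) is made up for the example; only the placement of the nsegs == 1 check mirrors the patch.

/*
 * Minimal standalone sketch (not kernel code) of the pattern applied by
 * the patch: update front_seg_size only when a segment boundary is hit,
 * plus once after the loop, rather than on every element.
 */
#include <stdio.h>
#include <stdbool.h>

struct elem {
	unsigned int len;
	bool mergeable;		/* can be merged into the current segment? */
};

int main(void)
{
	struct elem elems[] = {
		{ 512, true }, { 512, true },	/* segment 1: 1024 bytes */
		{ 4096, false },		/* segment 2 */
		{ 1024, false },		/* segment 3 */
	};
	unsigned int nsegs = 0, seg_size = 0, front_seg_size = 0;

	for (size_t i = 0; i < sizeof(elems) / sizeof(elems[0]); i++) {
		if (nsegs && elems[i].mergeable) {
			seg_size += elems[i].len;
			continue;	/* hot path: no front_seg_size work */
		}
		/* starting a new segment; if segment 1 just ended, record its size */
		if (nsegs == 1 && seg_size > front_seg_size)
			front_seg_size = seg_size;
		nsegs++;
		seg_size = elems[i].len;
	}
	/* the last (still-open) segment may itself be segment 1 */
	if (nsegs == 1 && seg_size > front_seg_size)
		front_seg_size = seg_size;

	printf("nsegs=%u front_seg_size=%u last_seg_size=%u\n",
	       nsegs, front_seg_size, seg_size);
	return 0;
}

Built with any C99 compiler, the sketch prints nsegs=3 front_seg_size=1024 last_seg_size=1024 for the sample input, matching the segmentation worked out by hand.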