Message ID | 1456836718-11509-1-git-send-email-ming.lei@canonical.com (mailing list archive) |
---|---|
State | New, archived |
On Tue, Mar 1, 2016 at 8:51 PM, Ming Lei <ming.lei@canonical.com> wrote:
> It is enough to check and compute bio->bi_seg_front_size just
> after the 1st segment is found, but current code checks that
> for each bvec, which is inefficient.
>
> This patch follows the way in __blk_recalc_rq_segments()
> for computing bio->bi_seg_front_size, and it is more efficient
> and code becomes more readable too.

Gentle ping, :-)

IMO, it is nice to follow the logic in __blk_recalc_rq_segments(),
so hope this one can be merged in for-next soon.

Thanks,
Ming

>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  block/blk-merge.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 2613531..e4c74a0 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -131,22 +131,21 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
>                          bvprvp = &bvprv;
>                          sectors += bv.bv_len >> 9;
>
> -                        if (nsegs == 1 && seg_size > front_seg_size)
> -                                front_seg_size = seg_size;
>                          continue;
>                  }
> new_segment:
>                  if (nsegs == queue_max_segments(q))
>                          goto split;
>
> +                if (nsegs == 1 && seg_size > front_seg_size)
> +                        front_seg_size = seg_size;
> +
>                  nsegs++;
>                  bvprv = bv;
>                  bvprvp = &bvprv;
>                  seg_size = bv.bv_len;
>                  sectors += bv.bv_len >> 9;
>
> -                if (nsegs == 1 && seg_size > front_seg_size)
> -                        front_seg_size = seg_size;
>          }
>
>          do_split = false;
> @@ -159,6 +158,8 @@ split:
>                  bio = new;
>          }
>
> +        if (nsegs == 1 && seg_size > front_seg_size)
> +                front_seg_size = seg_size;
>          bio->bi_seg_front_size = front_seg_size;
>          if (seg_size > bio->bi_seg_back_size)
>                  bio->bi_seg_back_size = seg_size;
> --
> 1.9.1
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..e4c74a0 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -131,22 +131,21 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
                         bvprvp = &bvprv;
                         sectors += bv.bv_len >> 9;
 
-                        if (nsegs == 1 && seg_size > front_seg_size)
-                                front_seg_size = seg_size;
                         continue;
                 }
 new_segment:
                 if (nsegs == queue_max_segments(q))
                         goto split;
 
+                if (nsegs == 1 && seg_size > front_seg_size)
+                        front_seg_size = seg_size;
+
                 nsegs++;
                 bvprv = bv;
                 bvprvp = &bvprv;
                 seg_size = bv.bv_len;
                 sectors += bv.bv_len >> 9;
 
-                if (nsegs == 1 && seg_size > front_seg_size)
-                        front_seg_size = seg_size;
         }
 
         do_split = false;
@@ -159,6 +158,8 @@ split:
                 bio = new;
         }
 
+        if (nsegs == 1 && seg_size > front_seg_size)
+                front_seg_size = seg_size;
         bio->bi_seg_front_size = front_seg_size;
         if (seg_size > bio->bi_seg_back_size)
                 bio->bi_seg_back_size = seg_size;
It is enough to check and compute bio->bi_seg_front_size just
after the 1st segment is found, but the current code checks it
for each bvec, which is inefficient.

This patch follows the way __blk_recalc_rq_segments() computes
bio->bi_seg_front_size; it is more efficient and the code
becomes more readable too.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/blk-merge.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
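For readers outside the thread, the pattern is worth spelling out: the
"nsegs == 1 && seg_size > front_seg_size" check only matters when the 1st
segment has just been completed, so it belongs on the new_segment path
(plus one fixup after the loop, for a bio whose bvecs all merge into a
single segment), not on the per-bvec merge path. Below is a minimal
standalone C sketch of that pattern; it is not the kernel code, and
MAX_SEG_SIZE, front_seg_size_of() and the simplified merge condition are
illustrative assumptions only:

#include <stdio.h>

/* Hypothetical per-segment size limit, standing in for the queue limits. */
#define MAX_SEG_SIZE 4096

/*
 * Track the size of the 1st segment while walking a list of bvec
 * lengths.  front_seg_size is updated only when a segment boundary is
 * reached (the "new_segment" path) and once after the loop for the
 * trailing segment, instead of being re-checked for every merged bvec.
 */
static unsigned int front_seg_size_of(const unsigned int *lens, int n)
{
        unsigned int front_seg_size = 0, seg_size = 0;
        int i, nsegs = 0;

        for (i = 0; i < n; i++) {
                if (nsegs && seg_size + lens[i] <= MAX_SEG_SIZE) {
                        /* bvec merges into the current segment: no
                         * front_seg_size check needed here. */
                        seg_size += lens[i];
                        continue;
                }
                /* new_segment: the 1st segment's size is now final. */
                if (nsegs == 1 && seg_size > front_seg_size)
                        front_seg_size = seg_size;
                nsegs++;
                seg_size = lens[i];
        }
        /* A bio whose bvecs all merged has its 1st segment end here. */
        if (nsegs == 1 && seg_size > front_seg_size)
                front_seg_size = seg_size;
        return front_seg_size;
}

int main(void)
{
        /* 1024 + 1024 + 512 merge into one 2560-byte segment; adding
         * 4096 would exceed MAX_SEG_SIZE, so it starts segment #2. */
        unsigned int lens[] = { 1024, 1024, 512, 4096, 2048 };

        printf("front_seg_size = %u\n",
               front_seg_size_of(lens, 5));      /* prints 2560 */
        return 0;
}

This is the same shape as __blk_recalc_rq_segments(): the conditional runs
once per segment boundary plus once after the loop, rather than once per
bvec, which is what makes the patched blk_bio_segment_split() both cheaper
and easier to read.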