From patchwork Mon Jun 26 12:10:09 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9809459
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton,
    Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ming Lei
Subject: [PATCH v2 26/51] block: use bio_for_each_segment_mp() to compute segments count
Date: Mon, 26 Jun 2017 20:10:09 +0800
Message-Id: <20170626121034.3051-27-ming.lei@redhat.com>
In-Reply-To: <20170626121034.3051-1-ming.lei@redhat.com>
References: <20170626121034.3051-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Firstly, it is more efficient to use bio_for_each_segment_mp() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many segments there are in the bio.

Secondly, once bio_for_each_segment_mp() is used, a bvec may need to be
split because its length can be very long and bigger than the max
segment size, so we have to support splitting one bvec into several
segments.
Thirdly, while splitting a multipage bvec into segments, the max
segment number may be reached; the bio then needs to be split when
this happens.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 97 ++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 79 insertions(+), 18 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index c6fcc49b9aea..8d2c2d763456 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -96,6 +96,62 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	bool need_split = false;
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+
+	if ((*nsegs >= queue_max_segments(q)) || !len)
+		return need_split;
+
+	/*
+	 * A multipage bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -110,7 +166,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	unsigned advance = 0;
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_segment_mp(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -125,8 +181,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			goto split;
 		}
@@ -138,10 +198,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 				goto new_segment;
 			if (seg_size + bv.bv_len > queue_max_segment_size(q)) {
 				/*
-				 * On assumption is that initial value of
-				 * @seg_size(equals to bv.bv_len) won't be
-				 * bigger than max segment size, but will
-				 * becomes false after multipage bvec comes.
+				 * The initial value of @seg_size won't be
+				 * bigger than max segment size, because we
+				 * split the bvec via bvec_split_segs().
				 */
				advance = queue_max_segment_size(q) - seg_size;
 
@@ -173,11 +232,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+				    &front_seg_size, &sectors))
+			goto split;
 
 		/* restore the bvec for iterator */
 		if (advance) {
@@ -251,6 +311,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -271,7 +332,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_segment_mp(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -293,20 +354,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
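
For readers less familiar with the block layer, the arithmetic that
bvec_split_segs() performs can be illustrated outside the kernel. The
sketch below is not part of the patch: it uses made-up userspace
stand-ins (MAX_SEG_SIZE, VIRT_BOUNDARY_MASK, struct sketch_bvec,
sketch_split_segs()) for queue_max_segment_size(), queue_virt_boundary()
and struct bio_vec, and it ignores the queue_max_segments() cap and the
front/last segment size bookkeeping. It only shows how one long
multipage bvec is counted as several segments, each capped at the max
segment size and cut where the virt boundary is violated, with a
non-zero remainder meaning the caller would have to split the bio.

/* Illustrative userspace sketch only; constants and names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SEG_SIZE		65536U	/* assumed max segment size */
#define VIRT_BOUNDARY_MASK	0xfffU	/* assumed 4k virt boundary */

struct sketch_bvec {
	unsigned int bv_len;
	unsigned int bv_offset;
};

/*
 * Count how many segments a single multipage bvec contributes and
 * report whether a remainder is left (i.e. the bio would be split).
 */
static bool sketch_split_segs(const struct sketch_bvec *bv, unsigned int *nsegs)
{
	unsigned int len = bv->bv_len;
	unsigned int total_len = 0;

	while (len) {
		unsigned int seg_size = len < MAX_SEG_SIZE ? len : MAX_SEG_SIZE;

		(*nsegs)++;
		total_len += seg_size;
		len -= seg_size;

		/*
		 * Stop if the segment end is not aligned on the virt
		 * boundary, mirroring the patch's queue_virt_boundary()
		 * check.
		 */
		if ((bv->bv_offset + total_len) & VIRT_BOUNDARY_MASK)
			break;
	}

	return len != 0;
}

int main(void)
{
	/* A 256KB multipage bvec starting at a boundary-aligned offset. */
	struct sketch_bvec bv = { .bv_len = 256 * 1024, .bv_offset = 0 };
	unsigned int nsegs = 0;
	bool need_split = sketch_split_segs(&bv, &nsegs);

	/* With a 64KB cap this reports 4 segments and no further split. */
	printf("segments=%u need_split=%d\n", nsegs, need_split);
	return 0;
}

With bv_offset = 0x200 instead, the first 64KB segment ends off the
boundary, so the sketch stops after one segment and reports that a
split is needed, which corresponds to the "split in the middle of the
bvec" case handled by the real bvec_split_segs().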