From patchwork Tue Dec 27 15:56:16 2016
X-Patchwork-Submitter: Ming Lei <tom.leiming@gmail.com>
X-Patchwork-Id: 9489447
From: Ming Lei <tom.leiming@gmail.com>
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Ming Lei, Jens Axboe
Subject: [PATCH v1 27/54] block: use bio_for_each_segment_mp() to compute segments count
Date: Tue, 27 Dec 2016 23:56:16 +0800
Message-Id: <1482854250-13481-28-git-send-email-tom.leiming@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>
References: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>
X-Mailing-List: linux-block@vger.kernel.org

Firstly, it is more efficient to use bio_for_each_segment_mp() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many segments there are in the bio.

Secondly, once bio_for_each_segment_mp() is used, a bvec may need to be
split, because its length can be very long and exceed the max segment
size, so one bvec may have to be split into several segments.

Thirdly, while splitting a multipage bvec into segments, the max segment
number may be reached, and then the bio needs to be split too.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-merge.c | 98 +++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 80 insertions(+), 18 deletions(-)
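For reviewers, the per-bvec counting loop can be tried in isolation.
Below is a minimal userspace sketch of the same math as the new
bvec_split_segs() in the diff that follows; max_seg_size and
virt_boundary_mask are illustrative stand-ins for
queue_max_segment_size() and queue_virt_boundary(), the max-segments
cap is omitted for brevity, and leftover length after the loop is what
forces a bio split.

/*
 * Userspace sketch (illustrative only): count how many segments one
 * multipage bvec produces under a max segment size and an optional
 * virt boundary mask. Leftover length in *left means the bio itself
 * would have to be split.
 */
#include <stdio.h>

static unsigned count_segs(unsigned bv_offset, unsigned bv_len,
			   unsigned max_seg_size,
			   unsigned virt_boundary_mask,
			   unsigned *left)
{
	unsigned nsegs = 0, total_len = 0;

	while (bv_len) {
		unsigned seg = bv_len < max_seg_size ? bv_len : max_seg_size;

		nsegs++;
		total_len += seg;
		bv_len -= seg;

		/* crossing the virt boundary ends this run of segments */
		if (virt_boundary_mask &&
		    ((bv_offset + total_len) & virt_boundary_mask))
			break;
	}
	*left = bv_len;
	return nsegs;
}

int main(void)
{
	unsigned left;
	/* a 1MB bvec with a 64KB max segment size -> 16 segments */
	unsigned nsegs = count_segs(0, 1 << 20, 64 << 10, 0, &left);

	printf("%u segments, %u bytes left\n", nsegs, left);
	return 0;
}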
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 05b6a3ef63f6..a0e97959db7b 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -82,6 +82,63 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	bool need_split = false;
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+	int idx;
+
+	if ((*nsegs >= queue_max_segments(q)) || !len)
+		return need_split;
+
+	/*
+	 * A multipage bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -97,7 +154,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	unsigned bvecs = 0;
 	unsigned advance = 0;
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_segment_mp(bv, bio, iter) {
 		/*
 		 * With arbitrary bio size, the incoming bio may be very
 		 * big. We have to split the bio into small bios so that
@@ -133,8 +190,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size, &sectors);
 			}
 			goto split;
 		}
@@ -146,10 +207,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 				goto new_segment;
 			if (seg_size + bv.bv_len > queue_max_segment_size(q)) {
 				/*
-				 * On assumption is that initial value of
-				 * @seg_size(equals to bv.bv_len) won't be
-				 * bigger than max segment size, but will
-				 * becomes false after multipage bvec comes.
+				 * The initial value of @seg_size won't be
+				 * bigger than max segment size, because we
+				 * split the bvec via bvec_split_segs().
 				 */
 				advance = queue_max_segment_size(q) -
 						seg_size;
@@ -181,11 +241,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+				    &front_seg_size, &sectors))
+			goto split;
 
 		/* restore the bvec for iterator */
 		if (advance) {
@@ -261,6 +322,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -281,7 +343,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_segment_mp(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -303,20 +365,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
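The arithmetic behind the new "split in the middle of bvec" branch in
blk_bio_segment_split() above can be sketched the same way. A minimal
userspace example follows; max_sectors, sectors, and bv_len are made-up
illustrative values, not kernel state:

/*
 * Illustrative userspace arithmetic only: when the accumulated sectors
 * plus this bvec would exceed max_sectors, truncate the bvec to the
 * bytes that still fit before counting its segments; the remainder
 * goes to the split-off bio.
 */
#include <stdio.h>

int main(void)
{
	unsigned max_sectors = 256;	/* e.g. a 128KB request limit */
	unsigned sectors = 250;		/* sectors already accumulated */
	unsigned bv_len = 64 << 10;	/* incoming 64KB bvec */

	/* bytes of this bvec that still fit: (max_sectors - sectors) << 9 */
	unsigned fit = (max_sectors - sectors) << 9;

	if (fit > bv_len)
		fit = bv_len;

	printf("count %u bytes now, split %u bytes into a new bio\n",
	       fit, bv_len - fit);
	return 0;
}

The design choice here is that the truncated front part is still fed
through bvec_split_segs(), so the segment count and front segment size
stay accurate even when the bio is split at a sector limit.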