From patchwork Sat Jun 9 12:29:50 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10455539
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
    Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V6 06/30] block: use bio_for_each_chunk() to compute multipage bvec count
Date: Sat, 9 Jun 2018 20:29:50 +0800
Message-Id: <20180609123014.8861-7-ming.lei@redhat.com>
In-Reply-To: <20180609123014.8861-1-ming.lei@redhat.com>
References: <20180609123014.8861-1-ming.lei@redhat.com>

First, it is more efficient to use bio_for_each_chunk() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multipage bvecs there are in the bio.
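As a rough illustration (a sketch only, not code from this patch), counting
multipage bvecs with the chunk iterator could look like the helper below. It
assumes bio_for_each_chunk(), introduced earlier in this series, keeps the
same (bvec, bio, iter) calling convention as bio_for_each_segment() -- as the
diff below uses it -- while returning each multipage bvec whole instead of
one page-sized piece at a time. The helper name is hypothetical.

	/* hypothetical helper, for illustration only */
	static unsigned count_multipage_bvecs(struct bio *bio)
	{
		struct bio_vec bv;
		struct bvec_iter iter;
		unsigned nr_chunks = 0;

		/* one iteration per multipage bvec, not per page */
		bio_for_each_chunk(bv, bio, iter)
			nr_chunks++;

		return nr_chunks;
	}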
Second, once bio_for_each_chunk() is used, the returned bvec may need to be
split because its length can be much longer than the max segment size, so we
have to split such a big bvec into several segments.

Third, while splitting a multipage bvec into segments, the max segment
number may be reached; when that happens, the bio needs to be split as well.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 76 insertions(+), 14 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index aaec38cc37b8..2493fe027953 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -97,6 +97,62 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	bool need_split = false;
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+
+	if ((*nsegs >= queue_max_segments(q)) || !len)
+		return need_split;
+
+	/*
+	 * A multipage bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -110,7 +166,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_chunk(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -125,8 +181,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		 */
 		if (nsegs < queue_max_segments(q) &&
 		    sectors < max_sectors) {
-			nsegs++;
-			sectors = max_sectors;
+			/* split in the middle of bvec */
+			bv.bv_len = (max_sectors - sectors) << 9;
+			bvec_split_segs(q, &bv, &nsegs,
+					&seg_size,
+					&front_seg_size,
+					&sectors);
 		}
 		goto split;
 	}
@@ -153,11 +213,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+				    &front_seg_size, &sectors))
+			goto split;
 	}
@@ -235,6 +296,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
@@ -255,7 +317,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_chunk(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -277,20 +339,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
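As a back-of-the-envelope check of the splitting arithmetic above (a
standalone user-space sketch, not kernel code: the 64KB bvec length and 4KB
segment limit are made-up example values standing in for bv->bv_len and
queue_max_segment_size(q), and the virt-boundary and max-segments checks are
ignored):

	#include <stdio.h>

	int main(void)
	{
		unsigned len = 64 * 1024;	/* stands in for bv->bv_len */
		const unsigned max_seg = 4096;	/* stands in for queue_max_segment_size(q) */
		unsigned nsegs = 0, sectors = 0;

		while (len) {
			/* consume at most one max-size segment per pass */
			unsigned seg_size = len < max_seg ? len : max_seg;

			nsegs++;
			sectors += seg_size >> 9;	/* 512-byte sectors, as in total_len >> 9 */
			len -= seg_size;
		}

		/* prints "16 segments, 128 sectors" */
		printf("%u segments, %u sectors\n", nsegs, sectors);
		return 0;
	}

This mirrors the per-bvec loop in bvec_split_segs(): each pass carves off at
most one max-size segment and accounts for its length in 512-byte sectors.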