From patchwork Wed Jun 27 12:45:36 2018
X-Patchwork-Submitter: Ming Lei <ming.lei@redhat.com>
X-Patchwork-Id: 10491399
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Kent Overstreet
Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
 Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V7 12/24] block: use bio_for_each_bvec() to compute multipage bvec count
Date: Wed, 27 Jun 2018 20:45:36 +0800
Message-Id: <20180627124548.3456-13-ming.lei@redhat.com>
In-Reply-To: <20180627124548.3456-1-ming.lei@redhat.com>
References: <20180627124548.3456-1-ming.lei@redhat.com>

First, it is more efficient to use bio_for_each_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multipage bvecs there are in the bio.

Second, once bio_for_each_bvec() is used, a bvec may need to be split
because its length can be much longer than the max segment size, so the
big bvec has to be split into several segments.

Third, while splitting a multipage bvec into segments, the max segment
number may be reached; when that happens, the bio needs to be split.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 76 insertions(+), 14 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index aaec38cc37b8..bf1dceb9656a 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -97,6 +97,62 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	bool need_split = false;
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+
+	if ((*nsegs >= queue_max_segments(q)) || !len)
+		return need_split;
+
+	/*
+	 * A multipage bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -110,7 +166,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_bvec(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -125,8 +181,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			goto split;
 		}
@@ -153,11 +213,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+				    &front_seg_size, &sectors))
+			goto split;
 	}
 
@@ -235,6 +296,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -255,7 +317,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_bvec(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -277,20 +339,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;