From patchwork Fri May 25 03:45:54 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10426031
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o,
Wong" , Coly Li , Filipe Manana , Ming Lei Subject: [RESEND PATCH V5 06/33] block: use bio_for_each_segment() to compute segments count Date: Fri, 25 May 2018 11:45:54 +0800 Message-Id: <20180525034621.31147-7-ming.lei@redhat.com> In-Reply-To: <20180525034621.31147-1-ming.lei@redhat.com> References: <20180525034621.31147-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Fri, 25 May 2018 03:48:00 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Fri, 25 May 2018 03:48:00 +0000 (UTC) for IP:'10.11.54.4' DOMAIN:'int-mx04.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'ming.lei@redhat.com' RCPT:'' X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Firstly it is more efficient to use bio_for_each_segment() in both blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how many segments there are in the bio. Secondaly once bio_for_each_segment() is used, the bvec may need to be splitted because its length can be very longer than max segment size, so we have to split the big bvec into several segments. Thirdly during splitting multipage bvec into segments, max segment number may be reached, then the bio need to be splitted when this happens. Signed-off-by: Ming Lei --- block/blk-merge.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 76 insertions(+), 14 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 545609fc4905..d157b752d965 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -97,6 +97,62 @@ static inline unsigned get_max_io_size(struct request_queue *q, return sectors; } +/* + * Split the bvec @bv into segments, and update all kinds of + * variables. + */ +static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv, + unsigned *nsegs, unsigned *last_seg_size, + unsigned *front_seg_size, unsigned *sectors) +{ + bool need_split = false; + unsigned len = bv->bv_len; + unsigned total_len = 0; + unsigned new_nsegs = 0, seg_size = 0; + + if ((*nsegs >= queue_max_segments(q)) || !len) + return need_split; + + /* + * Multipage bvec may be too big to hold in one segment, + * so the current bvec has to be splitted as multiple + * segments. 
+	 */
+	while (new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((queue_virt_boundary(q) && ((bv->bv_offset +
+		    total_len) & queue_virt_boundary(q))) || !len)
+			break;
+	}
+
+	/* split in the middle of the bvec */
+	if (len)
+		need_split = true;
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	return need_split;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -110,7 +166,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 
-	bio_for_each_page(bv, bio, iter) {
+	bio_for_each_segment(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -125,8 +181,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			goto split;
 		}
@@ -153,11 +213,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+				    &front_seg_size, &sectors))
+			goto split;
 	}
 
 	do_split = false;
@@ -225,6 +286,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -245,7 +307,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_page(bv, bio, iter) {
+		bio_for_each_segment(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -267,20 +329,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
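
For readers who want to try the splitting arithmetic outside the kernel,
here is a small user-space C model of the loop at the heart of
bvec_split_segs(). It is only a sketch: split_bvec(), MAX_SEGMENTS,
MAX_SEGMENT_SIZE and VIRT_BOUNDARY are made-up stand-ins for the queue
limits the real helper reads via queue_max_segments(),
queue_max_segment_size() and queue_virt_boundary(), and the model tracks
only the segment count, not the seg_size/front_seg_size/sectors
bookkeeping.

/*
 * User-space sketch (not kernel code) of the splitting arithmetic in
 * bvec_split_segs() above: a single large bvec is chopped into segments
 * no bigger than the max segment size, stopping early at a
 * virt-boundary gap or when the segment budget runs out. The limits
 * below are example values, not taken from the patch.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_SEGMENTS      4        /* example: queue_max_segments() */
#define MAX_SEGMENT_SIZE  65536u   /* example: queue_max_segment_size() */
#define VIRT_BOUNDARY     4095u    /* example: queue_virt_boundary() mask */

static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

/* Returns true when the bvec cannot be consumed fully (bio must split). */
static bool split_bvec(unsigned bv_offset, unsigned bv_len, unsigned *nsegs)
{
	unsigned len = bv_len, total_len = 0, new_nsegs = 0;

	if (*nsegs >= MAX_SEGMENTS || !len)
		return false;

	while (new_nsegs + *nsegs < MAX_SEGMENTS) {
		unsigned seg_size = min_u(MAX_SEGMENT_SIZE, len);

		new_nsegs++;
		total_len += seg_size;
		len -= seg_size;

		/* stop at a virt-boundary gap, or when the bvec is consumed */
		if (((bv_offset + total_len) & VIRT_BOUNDARY) || !len)
			break;
	}

	*nsegs += new_nsegs;
	return len != 0;	/* leftover bytes: split mid-bvec */
}

int main(void)
{
	unsigned nsegs = 0;
	/* a 300 KiB multipage bvec starting at offset 0 */
	bool need_split = split_bvec(0, 300 * 1024, &nsegs);

	printf("segments used: %u, need split: %s\n",
	       nsegs, need_split ? "yes" : "no");
	return 0;
}

With these example limits the 300 KiB bvec exhausts the whole
four-segment budget at 64 KiB per segment and still has bytes left
over, so the model reports "segments used: 4, need split: yes", i.e.
the bio would have to be split mid-bvec, which is exactly the
need_split case the patch handles.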