From patchwork Fri May 25 03:46:19 2018
X-Patchwork-Submitter: Ming Lei <ming.lei@redhat.com>
X-Patchwork-Id: 10426145
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o,
    Darrick J. Wong, Coly Li, Filipe Manana, Ming Lei
Subject: [RESEND PATCH V5 31/33] block: bio: pass segments to bio if bio_add_page() is bypassed
Date: Fri, 25 May 2018 11:46:19 +0800
Message-Id: <20180525034621.31147-32-ming.lei@redhat.com>
In-Reply-To: <20180525034621.31147-1-ming.lei@redhat.com>
References: <20180525034621.31147-1-ming.lei@redhat.com>

In some situations, such as block direct I/O, bio_add_page() cannot be
used to merge pages into a multipage bvec, so implement a new function
that converts a page array into a segment array; these cases can then
benefit from multipage bvecs as well.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index bc3992f52fe8..b7d9089cb28f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -913,6 +913,41 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+static unsigned convert_to_segs(struct bio* bio, struct page **pages,
+                                unsigned char *page_cnt,
+                                unsigned nr_pages)
+{
+
+        unsigned idx;
+        unsigned nr_seg = 0;
+        struct request_queue *q = NULL;
+
+        if (bio->bi_disk)
+                q = bio->bi_disk->queue;
+
+        if (!q || !blk_queue_cluster(q)) {
+                memset(page_cnt, 0, nr_pages);
+                return nr_pages;
+        }
+
+        page_cnt[nr_seg] = 0;
+        for (idx = 1; idx < nr_pages; idx++) {
+                struct page *pg_s = pages[nr_seg];
+                struct page *pg = pages[idx];
+
+                if (page_to_pfn(pg_s) + page_cnt[nr_seg] + 1 ==
+                    page_to_pfn(pg)) {
+                        page_cnt[nr_seg]++;
+                } else {
+                        page_cnt[++nr_seg] = 0;
+                        if (nr_seg < idx)
+                                pages[nr_seg] = pg;
+                }
+        }
+
+        return nr_seg + 1;
+}
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -928,6 +963,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
         struct page **pages = (struct page **)bv;
         size_t offset, diff;
         ssize_t size;
+        unsigned short nr_segs;
+        unsigned char page_cnt[nr_pages];       /* at most 256 pages */
 
         size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
         if (unlikely(size <= 0))
@@ -943,13 +980,18 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
          * need to be reflected here as well.
          */
         bio->bi_iter.bi_size += size;
-        bio->bi_vcnt += nr_pages;
-
         diff = (nr_pages * PAGE_SIZE - offset) - size;
-        while (nr_pages--) {
-                bv[nr_pages].bv_page = pages[nr_pages];
-                bv[nr_pages].bv_len = PAGE_SIZE;
-                bv[nr_pages].bv_offset = 0;
+
+        /* convert into segments */
+        nr_segs = convert_to_segs(bio, pages, page_cnt, nr_pages);
+        bio->bi_vcnt += nr_segs;
+
+        while (nr_segs--) {
+                unsigned cnt = (unsigned)page_cnt[nr_segs] + 1;
+
+                bv[nr_segs].bv_page = pages[nr_segs];
+                bv[nr_segs].bv_len = PAGE_SIZE * cnt;
+                bv[nr_segs].bv_offset = 0;
         }
 
         bv[0].bv_offset += offset;
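
Not part of the patch itself: below is a minimal, kernel-independent sketch of
the grouping idea that convert_to_segs() implements, included only as an
illustration. Plain page-frame numbers stand in for struct page pointers, and
the helper name group_segments() is invented for this sketch.

/* Illustrative sketch only: mirrors the page_cnt[] encoding used above. */
#include <stdio.h>

static unsigned group_segments(unsigned long *pfns, unsigned char *page_cnt,
                               unsigned nr_pages)
{
        unsigned idx, nr_seg = 0;

        page_cnt[0] = 0;
        for (idx = 1; idx < nr_pages; idx++) {
                if (pfns[nr_seg] + page_cnt[nr_seg] + 1 == pfns[idx]) {
                        /* physically contiguous: extend the current segment */
                        page_cnt[nr_seg]++;
                } else {
                        /* gap: start a new segment at this page */
                        page_cnt[++nr_seg] = 0;
                        pfns[nr_seg] = pfns[idx];
                }
        }
        return nr_seg + 1;
}

int main(void)
{
        /* pages 100..102 are contiguous; 200 and 300 each stand alone */
        unsigned long pfns[] = { 100, 101, 102, 200, 300 };
        unsigned char page_cnt[5];
        unsigned i, nr_segs = group_segments(pfns, page_cnt, 5);

        for (i = 0; i < nr_segs; i++)
                printf("segment %u: start pfn %lu, %u page(s)\n",
                       i, pfns[i], (unsigned)page_cnt[i] + 1);
        return 0;       /* prints 3 segments of 3, 1 and 1 pages */
}

As in the patch, page_cnt[i] records only the pages beyond the first, so a
segment of N contiguous pages is stored as page_cnt[i] == N - 1 and the bvec
length becomes PAGE_SIZE * (page_cnt[i] + 1).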