From patchwork Sat Jun 9 12:29:47 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10455523
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
 Filipe Manana, Randy Dunlap, Christoph Hellwig
Subject: [PATCH V6 03/30] block: use bio_add_page in bio_iov_iter_get_pages
Date: Sat, 9 Jun 2018 20:29:47 +0800
Message-Id: <20180609123014.8861-4-ming.lei@redhat.com>
In-Reply-To: <20180609123014.8861-1-ming.lei@redhat.com>
References: <20180609123014.8861-1-ming.lei@redhat.com>

From: Christoph Hellwig

Replace a nasty hack with a different nasty hack to prepare for
multipage bio_vecs.  By moving the temporary page array as far up as
possible in the space allocated for the bio_vec array we can iterate
forward over it and thus use bio_add_page.  Using bio_add_page means
we'll be able to merge physically contiguous pages once support for
multipage bio_vecs is merged.
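[Not part of the patch: the aliasing trick above only works because a bio_vec is at least twice the size of a page pointer, which is what the diff's BUILD_BUG_ON checks. A minimal userspace sketch of the pointer arithmetic, using hypothetical fake_page/fake_bvec stand-ins rather than the real kernel structures:]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct page and struct bio_vec; all that
 * matters here is that a vec entry is larger than a page pointer. */
struct fake_page { int id; };

struct fake_bvec {
	struct fake_page *bv_page;
	unsigned int bv_len;
	unsigned int bv_offset;
};

#define PAGE_PTRS_PER_BVEC \
	(sizeof(struct fake_bvec) / sizeof(struct fake_page *))

/*
 * Alias the tail of the space reserved for 'entries' vecs as a
 * page-pointer array, using the same arithmetic as the patch:
 *   pages += entries * (PAGE_PTRS_PER_BVEC - 1);
 * Pointer i then lives at or past the end of vec i, so filling the
 * vecs from the front (read pointer i, then write vec i) never
 * clobbers a pointer that has not been consumed yet.
 */
static struct fake_page **tail_page_array(struct fake_bvec *bv,
					  unsigned int entries)
{
	struct fake_page **pages = (struct fake_page **)bv;

	return pages + entries * (PAGE_PTRS_PER_BVEC - 1);
}
```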
Reviewed-by: Ming Lei
Signed-off-by: Christoph Hellwig
---
 block/bio.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index ebd3ca62e037..cb0f46e2752b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -902,6 +902,8 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -913,38 +915,33 @@ EXPORT_SYMBOL(bio_add_page);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
-	size_t offset, diff;
-	ssize_t size;
+	ssize_t size, left;
+	unsigned len, i;
+	size_t offset;
+
+	/*
+	 * Move page array up in the allocated memory for the bio vecs as
+	 * far as possible so that we can start filling biovecs from the
+	 * beginning without overwriting the temporary page array.
+	 */
+	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
+	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
-	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
-	/*
-	 * Deep magic below: We need to walk the pinned pages backwards
-	 * because we are abusing the space allocated for the bio_vecs
-	 * for the page array. Because the bio_vecs are larger than the
-	 * page pointers by definition this will always work. But it also
-	 * means we can't use bio_add_page, so any changes to it's semantics
-	 * need to be reflected here as well.
-	 */
-	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
-	}
+	for (left = size, i = 0; left > 0; left -= len, i++) {
+		struct page *page = pages[i];
 
-	bv[0].bv_offset += offset;
-	bv[0].bv_len -= offset;
-	if (diff)
-		bv[bio->bi_vcnt - 1].bv_len -= diff;
+		len = min_t(size_t, PAGE_SIZE - offset, left);
+		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
+			return -EINVAL;
+		offset = 0;
+	}
 
 	iov_iter_advance(iter, size);
 	return 0;
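[Not part of the patch: the new forward loop hands bio_add_page a segment per pinned page, where only the first segment carries the initial offset and the last may be short. A minimal userspace sketch of that length computation, assuming a 4096-byte page size and a hypothetical helper name:]

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PAGE_SIZE 4096u	/* assumed page size for illustration */

/*
 * Mirror the forward loop in the patch: given a pinned region of
 * 'size' bytes starting at byte 'offset' within its first page, emit
 * the per-page segment lengths that bio_add_page() would receive.
 * Only the first segment is shortened by 'offset'; 'offset' is reset
 * to 0 after it, exactly as in the patch.  Returns the number of
 * segments written to 'lens'.
 */
static unsigned int split_into_pages(size_t size, size_t offset,
				     unsigned int *lens, unsigned int max)
{
	unsigned int i = 0;
	size_t left, len;

	for (left = size; left > 0 && i < max; left -= len, i++) {
		/* len = min(bytes left in this page, bytes left overall) */
		len = left < DEMO_PAGE_SIZE - offset
			? left : DEMO_PAGE_SIZE - offset;
		lens[i] = (unsigned int)len;
		offset = 0;
	}
	return i;
}
```

For example, 6000 bytes starting at offset 1000 span two pages: 3096 bytes to the end of the first page, then the remaining 2904 in the second.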