From patchwork Wed Feb 27 20:20:06 2019
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10832291
From: Jens Axboe
To: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org
Cc: hch@lst.de, viro@ZenIV.linux.org.uk, Jens Axboe
Subject: [PATCH 2/2] block: add BIO_NO_PAGE_REF flag
Date: Wed, 27 Feb 2019 13:20:06 -0700
Message-Id: <20190227202006.18844-3-axboe@kernel.dk>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190227202006.18844-1-axboe@kernel.dk>
References: <20190227202006.18844-1-axboe@kernel.dk>

If bio_iov_iter_get_pages() is called on an iov_iter that is flagged with
NO_REF, then we don't need to add a page reference for the pages that we
add.

Add BIO_NO_PAGE_REF to track this in the bio, so IO completion knows not
to drop a reference to these pages.

Signed-off-by: Jens Axboe
---
 block/bio.c               | 43 ++++++++++++++++++++++-----------------
 fs/block_dev.c            | 12 ++++++-----
 fs/iomap.c                | 12 ++++++-----
 include/linux/blk_types.h |  1 +
 4 files changed, 39 insertions(+), 29 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 71a78d9fb8b7..b64cedc7f87c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -849,20 +849,14 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
 	size = bio_add_page(bio, bv->bv_page, len,
 				bv->bv_offset + iter->iov_offset);
 	if (size == len) {
-		struct page *page;
-		int i;
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF)) {
+			struct page *page;
+			int i;
+
+			mp_bvec_for_each_page(page, bv, i)
+				get_page(page);
+		}
 
-		/*
-		 * For the normal O_DIRECT case, we could skip grabbing this
-		 * reference and then not have to put them again when IO
-		 * completes. But this breaks some in-kernel users, like
-		 * splicing to/from a loop device, where we release the pipe
-		 * pages unconditionally. If we can fix that case, we can
-		 * get rid of the get here and the need to call
-		 * bio_release_pages() at IO completion time.
-		 */
-		mp_bvec_for_each_page(page, bv, i)
-			get_page(page);
 		iov_iter_advance(iter, size);
 		return 0;
 	}
@@ -925,10 +919,12 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
  * This takes either an iterator pointing to user memory, or one pointing to
  * kernel pages (BVEC iterator). If we're adding user pages, we pin them and
  * map them into the kernel. On IO completion, the caller should put those
- * pages. For now, when adding kernel pages, we still grab a reference to the
- * page. This isn't strictly needed for the common case, but some call paths
- * end up releasing pages from eg a pipe and we can't easily control these.
- * See comment in __bio_iov_bvec_add_pages().
+ * pages. If we're adding kernel pages, and the caller told us it's safe to
+ * do so, we just have to add the pages to the bio directly. We don't grab an
+ * extra reference to those pages (the user should already have that), and we
+ * don't put the page on IO completion. The caller needs to check if the bio is
+ * flagged BIO_NO_PAGE_REF on IO completion. If it isn't, then pages should be
+ * released.
  *
  * The function tries, but does not guarantee, to pin as many pages as
 * fit into the bio, or are requested in *iter, whatever is smaller. If
@@ -940,6 +936,13 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	const bool is_bvec = iov_iter_is_bvec(iter);
 	unsigned short orig_vcnt = bio->bi_vcnt;
 
+	/*
+	 * If this is a BVEC iter, then the pages are kernel pages. Don't
+	 * release them on IO completion, if the caller asked us to.
+	 */
+	if (is_bvec && iov_iter_bvec_no_ref(iter))
+		bio_set_flag(bio, BIO_NO_PAGE_REF);
+
 	do {
 		int ret;
 
@@ -1696,7 +1699,8 @@ static void bio_dirty_fn(struct work_struct *work)
 		next = bio->bi_private;
 
 		bio_set_pages_dirty(bio);
-		bio_release_pages(bio);
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+			bio_release_pages(bio);
 		bio_put(bio);
 	}
 }
@@ -1713,7 +1717,8 @@ void bio_check_pages_dirty(struct bio *bio)
 			goto defer;
 	}
 
-	bio_release_pages(bio);
+	if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+		bio_release_pages(bio);
 	bio_put(bio);
 	return;
 defer:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index e9faa52bb489..78d3257435c0 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -336,12 +336,14 @@ static void blkdev_bio_end_io(struct bio *bio)
 	if (should_dirty) {
 		bio_check_pages_dirty(bio);
 	} else {
-		struct bio_vec *bvec;
-		int i;
-		struct bvec_iter_all iter_all;
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF)) {
+			struct bvec_iter_all iter_all;
+			struct bio_vec *bvec;
+			int i;
 
-		bio_for_each_segment_all(bvec, bio, i, iter_all)
-			put_page(bvec->bv_page);
+			bio_for_each_segment_all(bvec, bio, i, iter_all)
+				put_page(bvec->bv_page);
+		}
 		bio_put(bio);
 	}
 }
diff --git a/fs/iomap.c b/fs/iomap.c
index 97cb9d486a7d..abdd18e404f8 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1589,12 +1589,14 @@ static void iomap_dio_bio_end_io(struct bio *bio)
 	if (should_dirty) {
 		bio_check_pages_dirty(bio);
 	} else {
-		struct bio_vec *bvec;
-		int i;
-		struct bvec_iter_all iter_all;
+		if (!bio_flagged(bio, BIO_NO_PAGE_REF)) {
+			struct bvec_iter_all iter_all;
+			struct bio_vec *bvec;
+			int i;
 
-		bio_for_each_segment_all(bvec, bio, i, iter_all)
-			put_page(bvec->bv_page);
+			bio_for_each_segment_all(bvec, bio, i, iter_all)
+				put_page(bvec->bv_page);
+		}
 		bio_put(bio);
 	}
 }
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d66bf5f32610..791fee35df88 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -215,6 +215,7 @@ struct bio {
 /*
  * bio flags
  */
+#define BIO_NO_PAGE_REF	0	/* don't put release vec pages */
 #define BIO_SEG_VALID	1	/* bi_phys_segments valid */
 #define BIO_CLONED	2	/* doesn't own data */
 #define BIO_BOUNCED	3	/* bio is a bounce bio */
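
For reference, any other direct-IO completion path that consumes bios built by
bio_iov_iter_get_pages() would follow the same pattern as the fs/block_dev.c
and fs/iomap.c hunks above. The sketch below is illustrative only and is not
part of this patch: the handler name example_dio_end_io is hypothetical, and
it simply mirrors the converted end_io handlers, using only helpers already
shown in the diff (bio_flagged(), bio_for_each_segment_all(), put_page()).

#include <linux/bio.h>
#include <linux/mm.h>

/*
 * Hypothetical completion handler (not part of this patch) showing the
 * expected BIO_NO_PAGE_REF handling: only drop page references when
 * bio_iov_iter_get_pages() actually took them.
 */
static void example_dio_end_io(struct bio *bio)
{
	if (!bio_flagged(bio, BIO_NO_PAGE_REF)) {
		struct bvec_iter_all iter_all;
		struct bio_vec *bvec;
		int i;

		/* release the references taken at submission time */
		bio_for_each_segment_all(bvec, bio, i, iter_all)
			put_page(bvec->bv_page);
	}
	bio_put(bio);
}

On the submission side nothing extra is needed beyond marking the bvec iter as
no-ref before calling bio_iov_iter_get_pages(); the block/bio.c hunk above then
sets BIO_NO_PAGE_REF on the bio when iov_iter_bvec_no_ref() says so.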