From patchwork Fri May 25 03:46:07 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10426095
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
	Filipe Manana, Ming Lei
Subject: [RESEND PATCH V5 19/33] block: convert to bio_for_each_page_all2()
Date: Fri, 25 May 2018 11:46:07 +0800
Message-Id: <20180525034621.31147-20-ming.lei@redhat.com>
In-Reply-To: <20180525034621.31147-1-ming.lei@redhat.com>
References: <20180525034621.31147-1-ming.lei@redhat.com>

We have to convert to bio_for_each_page_all2() for iterating over the
bio page by page, since bio_for_each_page_all() can't be used any more
once multipage bvec is enabled.
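For reference, here is a minimal sketch of how a call site changes with
this conversion. It is illustrative only: example_dirty_pages() is a
made-up function, not part of this patch; the helpers and the extra
'struct bvec_iter_all' argument are the ones this series introduces.

	#include <linux/bio.h>
	#include <linux/mm.h>

	/* Hypothetical example: dirty every page the bio touched. */
	static void example_dirty_pages(struct bio *bio)
	{
		struct bio_vec *bvec;
		int i;
		struct bvec_iter_all bia;	/* per-page iterator state */

		/*
		 * Old style, which only works while every bvec holds a
		 * single page:
		 *
		 *	bio_for_each_page_all(bvec, bio, i)
		 *		set_page_dirty_lock(bvec->bv_page);
		 *
		 * New style: the extra bvec_iter_all lets the helper hand
		 * out one page at a time even from a multipage bvec.
		 */
		bio_for_each_page_all2(bvec, bio, i, bia)
			set_page_dirty_lock(bvec->bv_page);
	}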
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c         | 18 ++++++++++++------
 block/blk-zoned.c   |  5 +++--
 block/bounce.c      |  6 ++++--
 include/linux/bio.h |  3 ++-
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index a200c42e55dc..a14c854b9111 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1119,8 +1119,9 @@ static int bio_copy_from_iter(struct bio *bio, struct iov_iter *iter)
 {
 	int i;
 	struct bio_vec *bvec;
+	struct bvec_iter_all bia;
 
-	bio_for_each_page_all(bvec, bio, i) {
+	bio_for_each_page_all2(bvec, bio, i, bia) {
 		ssize_t ret;
 
 		ret = copy_page_from_iter(bvec->bv_page,
@@ -1150,8 +1151,9 @@ static int bio_copy_to_iter(struct bio *bio, struct iov_iter iter)
 {
 	int i;
 	struct bio_vec *bvec;
+	struct bvec_iter_all bia;
 
-	bio_for_each_page_all(bvec, bio, i) {
+	bio_for_each_page_all2(bvec, bio, i, bia) {
 		ssize_t ret;
 
 		ret = copy_page_to_iter(bvec->bv_page,
@@ -1173,8 +1175,9 @@ void bio_free_pages(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all bia;
 
-	bio_for_each_page_all(bvec, bio, i)
+	bio_for_each_page_all2(bvec, bio, i, bia)
 		__free_page(bvec->bv_page);
 }
 EXPORT_SYMBOL(bio_free_pages);
@@ -1340,6 +1343,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	struct bio *bio;
 	int ret;
 	struct bio_vec *bvec;
+	struct bvec_iter_all bia;
 
 	if (!iov_iter_count(iter))
 		return ERR_PTR(-EINVAL);
@@ -1413,7 +1417,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	return bio;
 
  out_unmap:
-	bio_for_each_page_all(bvec, bio, j) {
+	bio_for_each_page_all2(bvec, bio, j, bia) {
 		put_page(bvec->bv_page);
 	}
 	bio_put(bio);
@@ -1424,11 +1428,12 @@ static void __bio_unmap_user(struct bio *bio)
 {
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all bia;
 
 	/*
 	 * make sure we dirty pages we wrote to
 	 */
-	bio_for_each_page_all(bvec, bio, i) {
+	bio_for_each_page_all2(bvec, bio, i, bia) {
 		if (bio_data_dir(bio) == READ)
 			set_page_dirty_lock(bvec->bv_page);
 
@@ -1520,8 +1525,9 @@ static void bio_copy_kern_endio_read(struct bio *bio)
 	char *p = bio->bi_private;
 	struct bio_vec *bvec;
 	int i;
+	struct bvec_iter_all bia;
 
-	bio_for_each_page_all(bvec, bio, i) {
+	bio_for_each_page_all2(bvec, bio, i, bia) {
 		memcpy(p, page_address(bvec->bv_page), bvec->bv_len);
 		p += bvec->bv_len;
 	}
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 77f3cecfaa7d..a76053d6fd6c 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -123,6 +123,7 @@ int blkdev_report_zones(struct block_device *bdev,
 	unsigned int ofst;
 	void *addr;
 	int ret;
+	struct bvec_iter_all bia;
 
 	if (!q)
 		return -ENXIO;
@@ -190,7 +191,7 @@ int blkdev_report_zones(struct block_device *bdev,
 	n = 0;
 	nz = 0;
 	nr_rep = 0;
-	bio_for_each_page_all(bv, bio, i) {
+	bio_for_each_page_all2(bv, bio, i, bia) {
 		if (!bv->bv_page)
 			break;
 
@@ -223,7 +224,7 @@ int blkdev_report_zones(struct block_device *bdev,
 
 	*nr_zones = nz;
 out:
-	bio_for_each_page_all(bv, bio, i)
+	bio_for_each_page_all2(bv, bio, i, bia)
 		__free_page(bv->bv_page);
 	bio_put(bio);
 
diff --git a/block/bounce.c b/block/bounce.c
index f4ee4b81f7a2..8b14683f4061 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -143,11 +143,12 @@ static void bounce_end_io(struct bio *bio, mempool_t *pool)
 	struct bio_vec *bvec, orig_vec;
 	int i;
 	struct bvec_iter orig_iter = bio_orig->bi_iter;
+	struct bvec_iter_all bia;
 
 	/*
 	 * free up bounce indirect pages used
 	 */
-	bio_for_each_page_all(bvec, bio, i) {
+	bio_for_each_page_all2(bvec, bio, i, bia) {
 		orig_vec = bio_iter_iovec(bio_orig, orig_iter);
 		if (bvec->bv_page != orig_vec.bv_page) {
 			dec_zone_page_state(bvec->bv_page, NR_BOUNCE);
@@ -203,6 +204,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	bool bounce = false;
 	int sectors = 0;
 	bool passthrough = bio_is_passthrough(*bio_orig);
+	struct bvec_iter_all bia;
 
 	bio_for_each_page(from, *bio_orig, iter) {
 		if (i++ < BIO_MAX_PAGES)
@@ -222,7 +224,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	bio = bio_clone_bioset(*bio_orig, GFP_NOIO, passthrough ? NULL :
 			bounce_bio_set);
 
-	bio_for_each_page_all(to, bio, i) {
+	bio_for_each_page_all2(to, bio, i, bia) {
 		struct page *page = to->bv_page;
 
 		if (page_to_pfn(page) <= q->limits.bounce_pfn)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 75baad77d9a8..5ae2bc876295 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -369,10 +369,11 @@ static inline unsigned bio_pages_all(struct bio *bio)
 {
 	unsigned i;
 	struct bio_vec *bv;
+	struct bvec_iter_all bia;
 
 	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
 
-	bio_for_each_page_all(bv, bio, i)
+	bio_for_each_page_all2(bv, bio, i, bia)
 		;
 	return i;
 }