From patchwork Sat Jun 9 12:30:02 2018
X-Patchwork-Submitter: Ming Lei <ming.lei@redhat.com>
X-Patchwork-Id: 10455599
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
    Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V6 18/30] block: convert to bio_for_each_chunk_segment_all()
Date: Sat, 9 Jun 2018 20:30:02 +0800
Message-Id: <20180609123014.8861-19-ming.lei@redhat.com>
In-Reply-To: <20180609123014.8861-1-ming.lei@redhat.com>
References: <20180609123014.8861-1-ming.lei@redhat.com>

Convert to bio_for_each_chunk_segment_all() for iterating the bio page by
page: once multipage bvecs are enabled, bio_for_each_segment_all() can no
longer be used for that purpose.
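The conversion is mechanical: each caller declares a struct bvec_chunk_iter
and passes it as an extra argument to the loop so the iterator can step page
by page inside a multipage chunk. A minimal sketch of the pattern follows;
example_put_bio_pages() is a hypothetical function for illustration only
(it mirrors bio_release_pages() in the diff below), and struct bvec_chunk_iter
and bio_for_each_chunk_segment_all() come from earlier patches in this
series, not from this one.

#include <linux/bio.h>
#include <linux/mm.h>

/*
 * Hypothetical example, not part of this patch: drop a reference on
 * every page attached to a bio, visiting each page individually even
 * when a single bvec covers a multipage chunk.
 */
static void example_put_bio_pages(struct bio *bio)
{
        struct bio_vec *bvec;
        int i;
        struct bvec_chunk_iter citer;

        /* the old code would use: bio_for_each_segment_all(bvec, bio, i) */
        bio_for_each_chunk_segment_all(bvec, bio, i, citer)
                put_page(bvec->bv_page);
}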
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c         | 27 ++++++++++++++++++---------
 block/blk-zoned.c   |  5 +++--
 block/bounce.c      |  6 ++++--
 include/linux/bio.h |  3 ++-
 4 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 60219f82ddab..276fc35ec559 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1146,8 +1146,9 @@ static int bio_copy_from_iter(struct bio *bio, struct iov_iter *iter)
 {
         int i;
         struct bio_vec *bvec;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 ssize_t ret;

                 ret = copy_page_from_iter(bvec->bv_page,
@@ -1177,8 +1178,9 @@ static int bio_copy_to_iter(struct bio *bio, struct iov_iter iter)
 {
         int i;
         struct bio_vec *bvec;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 ssize_t ret;

                 ret = copy_page_to_iter(bvec->bv_page,
@@ -1200,8 +1202,9 @@ void bio_free_pages(struct bio *bio)
 {
         struct bio_vec *bvec;
         int i;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i)
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer)
                 __free_page(bvec->bv_page);
 }
 EXPORT_SYMBOL(bio_free_pages);
@@ -1367,6 +1370,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
         struct bio *bio;
         int ret;
         struct bio_vec *bvec;
+        struct bvec_chunk_iter citer;

         if (!iov_iter_count(iter))
                 return ERR_PTR(-EINVAL);
@@ -1440,7 +1444,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
         return bio;

  out_unmap:
-        bio_for_each_segment_all(bvec, bio, j) {
+        bio_for_each_chunk_segment_all(bvec, bio, j, citer) {
                 put_page(bvec->bv_page);
         }
         bio_put(bio);
@@ -1451,11 +1455,12 @@ static void __bio_unmap_user(struct bio *bio)
 {
         struct bio_vec *bvec;
         int i;
+        struct bvec_chunk_iter citer;

         /*
          * make sure we dirty pages we wrote to
          */
-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 if (bio_data_dir(bio) == READ)
                         set_page_dirty_lock(bvec->bv_page);

@@ -1547,8 +1552,9 @@ static void bio_copy_kern_endio_read(struct bio *bio)
         char *p = bio->bi_private;
         struct bio_vec *bvec;
         int i;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 memcpy(p, page_address(bvec->bv_page), bvec->bv_len);
                 p += bvec->bv_len;
         }
@@ -1657,8 +1663,9 @@ void bio_set_pages_dirty(struct bio *bio)
 {
         struct bio_vec *bvec;
         int i;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 if (!PageCompound(bvec->bv_page))
                         set_page_dirty_lock(bvec->bv_page);
         }
@@ -1669,8 +1676,9 @@ static void bio_release_pages(struct bio *bio)
 {
         struct bio_vec *bvec;
         int i;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i)
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer)
                 put_page(bvec->bv_page);
 }

@@ -1717,8 +1725,9 @@ void bio_check_pages_dirty(struct bio *bio)
         struct bio_vec *bvec;
         unsigned long flags;
         int i;
+        struct bvec_chunk_iter citer;

-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
                         goto defer;
         }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 3d08dc84db16..9223666c845d 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -123,6 +123,7 @@ int blkdev_report_zones(struct block_device *bdev,
         unsigned int ofst;
         void *addr;
         int ret;
+        struct bvec_chunk_iter citer;

         if (!q)
                 return -ENXIO;
@@ -190,7 +191,7 @@ int blkdev_report_zones(struct block_device *bdev,
         n = 0;
         nz = 0;
         nr_rep = 0;
-        bio_for_each_segment_all(bv, bio, i) {
+        bio_for_each_chunk_segment_all(bv, bio, i, citer) {
                 if (!bv->bv_page)
                         break;

@@ -223,7 +224,7 @@ int blkdev_report_zones(struct block_device *bdev,
         *nr_zones = nz;
 out:
-        bio_for_each_segment_all(bv, bio, i)
+        bio_for_each_chunk_segment_all(bv, bio, i, citer)
                 __free_page(bv->bv_page);

         bio_put(bio);
diff --git a/block/bounce.c b/block/bounce.c
index fd31347b7836..c6af0bd29ec9 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -146,11 +146,12 @@ static void bounce_end_io(struct bio *bio, mempool_t *pool)
         struct bio_vec *bvec, orig_vec;
         int i;
         struct bvec_iter orig_iter = bio_orig->bi_iter;
+        struct bvec_chunk_iter citer;

         /*
          * free up bounce indirect pages used
          */
-        bio_for_each_segment_all(bvec, bio, i) {
+        bio_for_each_chunk_segment_all(bvec, bio, i, citer) {
                 orig_vec = bio_iter_iovec(bio_orig, orig_iter);
                 if (bvec->bv_page != orig_vec.bv_page) {
                         dec_zone_page_state(bvec->bv_page, NR_BOUNCE);
@@ -206,6 +207,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
         bool bounce = false;
         int sectors = 0;
         bool passthrough = bio_is_passthrough(*bio_orig);
+        struct bvec_chunk_iter citer;

         bio_for_each_segment(from, *bio_orig, iter) {
                 if (i++ < BIO_MAX_PAGES)
@@ -225,7 +227,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
         bio = bio_clone_bioset(*bio_orig, GFP_NOIO, passthrough ? NULL :
                         &bounce_bio_set);

-        bio_for_each_segment_all(to, bio, i) {
+        bio_for_each_chunk_segment_all(to, bio, i, citer) {
                 struct page *page = to->bv_page;

                 if (page_to_pfn(page) <= q->limits.bounce_pfn)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index f21384be9b51..c22b8be961ce 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -374,10 +374,11 @@ static inline unsigned bio_pages_all(struct bio *bio)
 {
         unsigned i;
         struct bio_vec *bv;
+        struct bvec_chunk_iter citer;

         WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));

-        bio_for_each_segment_all(bv, bio, i)
+        bio_for_each_chunk_segment_all(bv, bio, i, citer)
                 ;
         return i;
 }