From patchwork Mon Jun 26 12:09:50 2017
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton, Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    Ming Lei <ming.lei@redhat.com>, linux-bcache@vger.kernel.org
Subject: [PATCH v2 07/51] bcache: comment on direct access to bvec table
Date: Mon, 26 Jun 2017 20:09:50 +0800
Message-Id: <20170626121034.3051-8-ming.lei@redhat.com>
In-Reply-To: <20170626121034.3051-1-ming.lei@redhat.com>
References: <20170626121034.3051-1-ming.lei@redhat.com>

All of these direct accesses to the bvec table look safe after
multipage bvec is supported.
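For illustration only (not part of this patch), a minimal sketch
contrasting the preferred bio_add_page() interface with the kind of
direct bvec-table access these comments annotate; the helper names
setup_sb_bio()/setup_sb_bio_direct() are hypothetical:

    #include <linux/bio.h>

    static void setup_sb_bio(struct bio *bio, struct page *sb_page)
    {
            /* Preferred: let the block layer maintain the bvec table. */
            bio_add_page(bio, sb_page, PAGE_SIZE, 0);
    }

    static void setup_sb_bio_direct(struct bio *bio, struct page *sb_page)
    {
            /*
             * Direct access, as bcache does for its superblock bios:
             * safe only because the bio carries exactly one page, so
             * the layout of the bvec table is unaffected once
             * multipage bvecs arrive.
             */
            bio->bi_io_vec[0].bv_page = sb_page;
            bio->bi_io_vec[0].bv_len = PAGE_SIZE;
            bio->bi_io_vec[0].bv_offset = 0;
            bio->bi_vcnt = 1;
            bio->bi_iter.bi_size = PAGE_SIZE;
    }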
Cc: linux-bcache@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/bcache/btree.c | 1 +
 drivers/md/bcache/super.c | 6 ++++++
 drivers/md/bcache/util.c  | 7 +++++++
 3 files changed, 14 insertions(+)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 866dcf78ff8e..3da595ae565b 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
 		continue_at(cl, btree_node_write_done, NULL);
 	} else {
+		/* No harm for multipage bvec since the new bio is just allocated */
 		b->bio->bi_vcnt = 0;
 
 		bch_bio_map(b->bio, i);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 8352fad765f6..6808f548cd13 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -208,6 +208,7 @@ static void write_bdev_super_endio(struct bio *bio)
 
 static void __write_super(struct cache_sb *sb, struct bio *bio)
 {
+	/* single page bio, safe for multipage bvec */
 	struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page);
 	unsigned i;
@@ -1154,6 +1155,8 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
 	dc->bdev->bd_holder = dc;
 
 	bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
+
+	/* single page bio, safe for multipage bvec */
 	dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
 	get_page(sb_page);
@@ -1799,6 +1802,7 @@ void bch_cache_release(struct kobject *kobj)
 	for (i = 0; i < RESERVE_NR; i++)
 		free_fifo(&ca->free[i]);
 
+	/* single page bio, safe for multipage bvec */
 	if (ca->sb_bio.bi_inline_vecs[0].bv_page)
 		put_page(ca->sb_bio.bi_io_vec[0].bv_page);
@@ -1854,6 +1858,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
 	ca->bdev->bd_holder = ca;
 
 	bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
+
+	/* single page bio, safe for multipage bvec */
 	ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
 	get_page(sb_page);
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index 8c3a938f4bf0..11b4230ea6ad 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -223,6 +223,13 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
 		: 0;
 }
 
+/*
+ * Generally it isn't good to access .bi_io_vec and .bi_vcnt
+ * directly; the preferred way is bio_add_page(). But in this
+ * case, bch_bio_map() supposes that the bvec table is empty,
+ * so it is safe to access .bi_vcnt and .bi_io_vec in this way
+ * even after multipage bvec is supported.
+ */
 void bch_bio_map(struct bio *bio, void *base)
 {
 	size_t size = bio->bi_iter.bi_size;
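For reference (again, not part of the patch), a simplified sketch of
the pattern the util.c comment describes: bch_bio_map() starts from an
empty bvec table and fills one single-page bvec per iteration, which
is why its direct .bi_vcnt/.bi_io_vec accesses stay valid even once
multipage bvecs are supported. The real implementation also handles a
base address that is not page aligned; that detail is elided here and
the function is renamed bch_bio_map_sketch() to mark it as hypothetical:

    #include <linux/bio.h>
    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    void bch_bio_map_sketch(struct bio *bio, void *base)
    {
            size_t size = bio->bi_iter.bi_size;
            struct bio_vec *bv = bio->bi_io_vec;

            /*
             * Relies on an empty bvec table: each iteration appends
             * exactly one single-page bvec, bumping .bi_vcnt as it goes.
             */
            for (; size; bio->bi_vcnt++, bv++) {
                    bv->bv_offset = 0;
                    bv->bv_len = min_t(size_t, PAGE_SIZE, size);
                    size -= bv->bv_len;
                    if (!base)
                            continue;
                    /* Map the buffer page backing this chunk. */
                    bv->bv_page = is_vmalloc_addr(base)
                            ? vmalloc_to_page(base)
                            : virt_to_page(base);
                    base += bv->bv_len;
            }
    }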