From patchwork Wed May 3 15:24:36 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13230272
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org, Johannes Thumshirn
Subject: [PATCH 16/21] btrfs: remove the io_pages field in struct extent_buffer
Date: Wed, 3 May 2023 17:24:36 +0200
Message-Id: <20230503152441.1141019-17-hch@lst.de>
In-Reply-To: <20230503152441.1141019-1-hch@lst.de>
References: <20230503152441.1141019-1-hch@lst.de>

No need to track the number of pages under I/O now that each extent_buffer
is read and written using a single bio.  For the read side we need to grab
an extra reference for the duration of the I/O to prevent eviction, though.
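
[Editor's note: a minimal user-space sketch of the read-side pattern described
above.  This is not the kernel code; every identifier below is made up purely
for illustration, using C11 atomics to stand in for the kernel refcount.]

/*
 * Illustrative sketch only: "pin the buffer with one extra reference while
 * its single read bio is in flight, drop it in the completion handler".
 * None of these names exist in btrfs.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct model_extent_buffer {
        atomic_int refs;
        int contents_uptodate;
};

static void model_put(struct model_extent_buffer *eb)
{
        /* Last reference dropped: the buffer may now be freed ("evicted"). */
        if (atomic_fetch_sub(&eb->refs, 1) == 1)
                free(eb);
}

static void model_read_end_io(struct model_extent_buffer *eb, int uptodate)
{
        eb->contents_uptodate = uptodate;
        model_put(eb);                  /* drop the reference taken at submit time */
}

static void model_submit_read(struct model_extent_buffer *eb)
{
        /* Pin eb for the duration of the I/O so it cannot go away. */
        atomic_fetch_add(&eb->refs, 1);
        /* The real code submits one bio covering all pages; completion is
         * shown inline here for brevity. */
        model_read_end_io(eb, 1);
}

int main(void)
{
        struct model_extent_buffer *eb = calloc(1, sizeof(*eb));

        atomic_store(&eb->refs, 1);     /* caller's own reference */
        model_submit_read(eb);
        printf("uptodate=%d refs=%d\n", eb->contents_uptodate,
               atomic_load(&eb->refs));
        model_put(eb);                  /* caller drops its reference */
        return 0;
}
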
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/extent_io.c | 17 +++++------------
 fs/btrfs/extent_io.h |  1 -
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 28088b751f8021..83e8d71e2c4d44 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1767,8 +1767,6 @@ static void extent_buffer_write_end_io(struct btrfs_bio *bbio)
                 struct page *page = bvec->bv_page;
                 u32 len = bvec->bv_len;
 
-                atomic_dec(&eb->io_pages);
-
                 if (!uptodate) {
                         btrfs_page_clear_uptodate(fs_info, page, start, len);
                         btrfs_page_set_error(fs_info, page, start, len);
@@ -1791,7 +1789,6 @@ static void prepare_eb_write(struct extent_buffer *eb)
         unsigned long end;
 
         clear_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags);
-        atomic_set(&eb->io_pages, num_extent_pages(eb));
 
         /* Set btree blocks beyond nritems with 0 to avoid stale content */
         nritems = btrfs_header_nritems(eb);
@@ -3231,8 +3228,7 @@ static void __free_extent_buffer(struct extent_buffer *eb)
 
 static int extent_buffer_under_io(const struct extent_buffer *eb)
 {
-        return (atomic_read(&eb->io_pages) ||
-                test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags) ||
+        return (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags) ||
                 test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
 }
 
@@ -3369,7 +3365,6 @@ __alloc_extent_buffer(struct btrfs_fs_info *fs_info, u64 start,
 
         spin_lock_init(&eb->refs_lock);
         atomic_set(&eb->refs, 1);
-        atomic_set(&eb->io_pages, 0);
 
         ASSERT(len <= BTRFS_MAX_METADATA_BLOCKSIZE);
 
@@ -3486,9 +3481,9 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
          * adequately protected by the refcount, but the TREE_REF bit and
          * its corresponding reference are not. To protect against this
          * class of races, we call check_buffer_tree_ref from the codepaths
-         * which trigger io after they set eb->io_pages. Note that once io is
-         * initiated, TREE_REF can no longer be cleared, so that is the
-         * moment at which any such race is best fixed.
+         * which trigger io. Note that once io is initiated, TREE_REF can no
+         * longer be cleared, so that is the moment at which any such race is
+         * best fixed.
          */
         refs = atomic_read(&eb->refs);
         if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
@@ -4058,7 +4053,6 @@ static void extent_buffer_read_end_io(struct btrfs_bio *bbio)
         struct bio_vec *bvec;
         u32 bio_offset = 0;
 
-        atomic_inc(&eb->refs);
         eb->read_mirror = bbio->mirror_num;
 
         if (uptodate &&
@@ -4073,7 +4067,6 @@ static void extent_buffer_read_end_io(struct btrfs_bio *bbio)
         }
 
         bio_for_each_segment_all(bvec, &bbio->bio, iter_all) {
-                atomic_dec(&eb->io_pages);
                 end_page_read(bvec->bv_page, uptodate, eb->start + bio_offset,
                               bvec->bv_len);
                 bio_offset += bvec->bv_len;
@@ -4096,8 +4089,8 @@ static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
 
         clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
         eb->read_mirror = 0;
-        atomic_set(&eb->io_pages, num_pages);
         check_buffer_tree_ref(eb);
+        atomic_inc(&eb->refs);
 
         bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
                                REQ_OP_READ | REQ_META, eb->fs_info,
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 342412d37a7b4b..12854a2b48f060 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -79,7 +79,6 @@ struct extent_buffer {
         struct btrfs_fs_info *fs_info;
         spinlock_t refs_lock;
         atomic_t refs;
-        atomic_t io_pages;
         int read_mirror;
         struct rcu_head rcu_head;
         pid_t lock_owner;
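
[Editor's note: a similarly hedged sketch of the write-side consequence.  With
io_pages gone, extent_buffer_under_io() only consults the WRITEBACK and DIRTY
bits, and a single write-bio completion ends the "under I/O" state.  The model
below uses made-up names and plain bools instead of the kernel's bit flags.]

/* User-space model only; none of these identifiers exist in btrfs. */
#include <stdbool.h>
#include <stdio.h>

struct model_eb_flags {
        bool writeback;         /* stands in for EXTENT_BUFFER_WRITEBACK */
        bool dirty;             /* stands in for EXTENT_BUFFER_DIRTY */
};

/* After the patch there is no per-page counter left to consult. */
static bool model_under_io(const struct model_eb_flags *eb)
{
        return eb->writeback || eb->dirty;
}

/* One write bio per buffer, so one completion clears the writeback state. */
static void model_write_end_io(struct model_eb_flags *eb)
{
        eb->writeback = false;
}

int main(void)
{
        struct model_eb_flags eb = { .writeback = true, .dirty = false };

        printf("under io before completion: %d\n", model_under_io(&eb));
        model_write_end_io(&eb);
        printf("under io after completion:  %d\n", model_under_io(&eb));
        return 0;
}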