From patchwork Thu Feb 16 15:07:00 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13143336
From: David Howells
To: Matthew Wilcox
Cc: David Howells, Linus Torvalds, Jeff Layton, Christoph Hellwig,
    linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org,
    linux-cifs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs-developer@lists.sourceforge.net, linux-erofs@lists.ozlabs.org,
    linux-ext4@vger.kernel.org, linux-cachefs@redhat.com,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Rohith Surabattula, Steve French, Shyam Prasad N, Dave Wysochanski,
    Dominique Martinet, Ilya Dryomov, "Theodore Ts'o", Andreas Dilger,
    linux-mm@kvack.org
Subject: [PATCH v6 1/2] mm: Merge folio_has_private()/filemap_release_folio()
 call pairs
Date: Thu, 16 Feb 2023 15:07:00 +0000
Message-Id: <20230216150701.3654894-2-dhowells@redhat.com>
In-Reply-To: <20230216150701.3654894-1-dhowells@redhat.com>
References: <20230216150701.3654894-1-dhowells@redhat.com>
Make filemap_release_folio() check folio_has_private().
Then, in most cases where a call to folio_has_private() is immediately
followed by a call to filemap_release_folio(), we can get rid of the test
in the pair.

There are a couple of sites in mm/vmscan.c where this can't so easily be
done.  In shrink_folio_list(), there are actually three cases (something
different is done for incompletely invalidated buffers), but
filemap_release_folio() elides two of them.  In shrink_active_list(), we
don't have the folio lock yet, so the check allows us to avoid locking the
page unnecessarily.

A wrapper function to check if a folio needs release is provided for those
places that still need to do it in the mm/ directory.  This will acquire
additional parts to the condition in a future patch.

After this, the only remaining caller of folio_has_private() outside of
mm/ is a check in fuse.

Reported-by: Rohith Surabattula
Suggested-by: Matthew Wilcox
Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Linus Torvalds
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Dave Wysochanski
cc: Dominique Martinet
cc: Ilya Dryomov
cc: "Theodore Ts'o"
cc: Andreas Dilger
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-afs@lists.infradead.org
cc: v9fs-developer@lists.sourceforge.net
cc: ceph-devel@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: linux-ext4@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
Notes:
    ver #5)
     - Rebased on linus/master.  try_to_release_page() has now been entirely
       replaced by filemap_release_folio(), barring one comment.
     - Cleaned up some pairs in ext4.

    ver #4)
     - Split from fscache fix.
     - Moved folio_needs_release() to mm/internal.h and removed open-coded
       version from filemap_release_folio().

    ver #3)
     - Fixed mapping_clear_release_always() to use clear_bit() not set_bit().
     - Moved a '&&' to the correct line.

    ver #2)
     - Rewrote entirely according to Willy's suggestion[1].
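[Editorial aside: the transformation described above can be sketched in
ordinary C.  This is a hypothetical, much-simplified model — the struct
folio and function bodies here are stand-ins, not the kernel's — intended
only to show why folding the folio_has_private() test into
filemap_release_folio() preserves caller behaviour.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical stand-ins: the real struct folio and
 * filemap_release_folio() are kernel internals.  This models only the
 * question of who performs the "has private data?" test.
 */
struct folio {
	bool has_private;	/* models folio_has_private() */
	bool release_ok;	/* whether ->release_folio() would succeed */
};

static bool folio_has_private(const struct folio *folio)
{
	return folio->has_private;
}

/* The wrapper this patch adds to mm/internal.h. */
static bool folio_needs_release(const struct folio *folio)
{
	return folio_has_private(folio);
}

/*
 * After this patch the test lives inside the releaser: a folio with
 * nothing attached trivially "releases" successfully, so callers can
 * replace the
 *	if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
 * pattern with a bare
 *	if (!filemap_release_folio(folio, 0))
 * without changing the outcome.
 */
static bool filemap_release_folio(struct folio *folio)
{
	if (!folio_needs_release(folio))
		return true;
	return folio->release_ok;
}
```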
 fs/ext4/move_extent.c | 12 ++++--------
 fs/splice.c           |  3 +--
 mm/filemap.c          |  2 ++
 mm/huge_memory.c      |  3 +--
 mm/internal.h         |  8 ++++++++
 mm/khugepaged.c       |  3 +--
 mm/memory-failure.c   |  8 +++-----
 mm/migrate.c          |  3 +--
 mm/truncate.c         |  6 ++----
 mm/vmscan.c           |  8 ++++----
 10 files changed, 27 insertions(+), 29 deletions(-)

diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 8dbb87edf24c..dedc9d445f24 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -339,10 +339,8 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 		ext4_double_up_write_data_sem(orig_inode, donor_inode);
 		goto data_copy;
 	}
-	if ((folio_has_private(folio[0]) &&
-	     !filemap_release_folio(folio[0], 0)) ||
-	    (folio_has_private(folio[1]) &&
-	     !filemap_release_folio(folio[1], 0))) {
+	if (!filemap_release_folio(folio[0], 0) ||
+	    !filemap_release_folio(folio[1], 0)) {
 		*err = -EBUSY;
 		goto drop_data_sem;
 	}
@@ -361,10 +359,8 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 	/* At this point all buffers in range are uptodate, old mapping layout
 	 * is no longer required, try to drop it now.
 	 */
-	if ((folio_has_private(folio[0]) &&
-	     !filemap_release_folio(folio[0], 0)) ||
-	    (folio_has_private(folio[1]) &&
-	     !filemap_release_folio(folio[1], 0))) {
+	if (!filemap_release_folio(folio[0], 0) ||
+	    !filemap_release_folio(folio[1], 0)) {
 		*err = -EBUSY;
 		goto unlock_folios;
 	}
diff --git a/fs/splice.c b/fs/splice.c
index 5969b7a1d353..e69eddaf9d7c 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -65,8 +65,7 @@ static bool page_cache_pipe_buf_try_steal(struct pipe_inode_info *pipe,
 	 */
 	folio_wait_writeback(folio);

-	if (folio_has_private(folio) &&
-	    !filemap_release_folio(folio, GFP_KERNEL))
+	if (!filemap_release_folio(folio, GFP_KERNEL))
 		goto out_unlock;

 	/*
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace9cc70..344146c170b0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3960,6 +3960,8 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 	struct address_space * const mapping = folio->mapping;

 	BUG_ON(!folio_test_locked(folio));
+	if (!folio_needs_release(folio))
+		return true;
 	if (folio_test_writeback(folio))
 		return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index abe6cfd92ffa..8490c42dedb3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2702,8 +2702,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 					GFP_RECLAIM_MASK);

-	if (folio_test_private(folio) &&
-	    !filemap_release_folio(folio, gfp)) {
+	if (!filemap_release_folio(folio, gfp)) {
 		ret = -EBUSY;
 		goto out;
 	}
diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..c4c8e58e1d12 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -163,6 +163,14 @@ static inline void set_page_refcounted(struct page *page)
 	set_page_count(page, 1);
 }

+/*
+ * Return true if a folio needs ->release_folio() calling upon it.
+ */
+static inline bool folio_needs_release(struct folio *folio)
+{
+	return folio_has_private(folio);
+}
+
 extern unsigned long highest_memmap_pfn;

 /*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 90acfea40c13..e257e0a13ad1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1944,8 +1944,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		goto out_unlock;
 	}

-	if (folio_has_private(folio) &&
-	    !filemap_release_folio(folio, GFP_KERNEL)) {
+	if (!filemap_release_folio(folio, GFP_KERNEL)) {
 		result = SCAN_PAGE_HAS_PRIVATE;
 		folio_putback_lru(folio);
 		goto out_unlock;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index c77a9e37e27e..a4f809c11ae9 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -843,14 +843,12 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
 		struct folio *folio = page_folio(p);
 		int err = mapping->a_ops->error_remove_page(mapping, p);

-		if (err != 0) {
+		if (err != 0)
 			pr_info("%#lx: Failed to punch page: %d\n", pfn, err);
-		} else if (folio_has_private(folio) &&
-			   !filemap_release_folio(folio, GFP_NOIO)) {
+		else if (!filemap_release_folio(folio, GFP_NOIO))
 			pr_info("%#lx: failed to release buffers\n", pfn);
-		} else {
+		else
 			ret = MF_RECOVERED;
-		}
 	} else {
 		/*
 		 * If the file system doesn't support it just invalidate
diff --git a/mm/migrate.c b/mm/migrate.c
index a4d3fc65085f..db867bb80128 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -915,8 +915,7 @@ static int fallback_migrate_folio(struct address_space *mapping,
 	 * Buffers may be managed in a filesystem specific way.
 	 * We must have no buffers or drop them.
 	 */
-	if (folio_test_private(src) &&
-	    !filemap_release_folio(src, GFP_KERNEL))
+	if (!filemap_release_folio(src, GFP_KERNEL))
 		return mode == MIGRATE_SYNC ?
 				-EAGAIN : -EBUSY;

 	return migrate_folio(mapping, dst, src, mode);
diff --git a/mm/truncate.c b/mm/truncate.c
index 7b4ea4c4a46b..8378aabb5294 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -19,7 +19,6 @@
 #include 
 #include 
 #include 
-#include 	/* grr. try_to_release_page */
 #include 
 #include 
 #include "internal.h"
@@ -276,7 +275,7 @@ static long mapping_evict_folio(struct address_space *mapping,
 	if (folio_ref_count(folio) >
 			folio_nr_pages(folio) + folio_has_private(folio) + 1)
 		return 0;
-	if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
+	if (!filemap_release_folio(folio, 0))
 		return 0;

 	return remove_mapping(mapping, folio);
@@ -573,8 +572,7 @@ static int invalidate_complete_folio2(struct address_space *mapping,
 	if (folio->mapping != mapping)
 		return 0;

-	if (folio_has_private(folio) &&
-	    !filemap_release_folio(folio, GFP_KERNEL))
+	if (!filemap_release_folio(folio, GFP_KERNEL))
 		return 0;

 	spin_lock(&mapping->host->i_lock);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bf3eedf0209c..0fd6623adccb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1996,7 +1996,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 * (refcount == 1) it can be freed. Otherwise, leave
 		 * the folio on the LRU so it is swappable.
 		 */
-		if (folio_has_private(folio)) {
+		if (folio_needs_release(folio)) {
 			if (!filemap_release_folio(folio, sc->gfp_mask))
 				goto activate_locked;
 			if (!mapping && folio_ref_count(folio) == 1) {
@@ -2641,9 +2641,9 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}

 		if (unlikely(buffer_heads_over_limit)) {
-			if (folio_test_private(folio) && folio_trylock(folio)) {
-				if (folio_test_private(folio))
-					filemap_release_folio(folio, 0);
+			if (folio_needs_release(folio) &&
+			    folio_trylock(folio)) {
+				filemap_release_folio(folio, 0);
 				folio_unlock(folio);
 			}
 		}

From patchwork Thu Feb 16 15:07:01 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13143337
From: David Howells
To: Matthew Wilcox
Cc: David Howells, Linus Torvalds, Jeff Layton, Christoph Hellwig,
    linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org,
    linux-cifs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs-developer@lists.sourceforge.net, linux-erofs@lists.ozlabs.org,
    linux-ext4@vger.kernel.org, linux-cachefs@redhat.com,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Rohith Surabattula, Steve French, Shyam Prasad N, Dave Wysochanski,
    Dominique Martinet, Ilya Dryomov, linux-mm@kvack.org
Subject: [PATCH v6 2/2] mm, netfs, fscache: Stop read optimisation when folio
 removed from pagecache
Date: Thu, 16 Feb 2023 15:07:01 +0000
Message-Id: <20230216150701.3654894-3-dhowells@redhat.com>
In-Reply-To: <20230216150701.3654894-1-dhowells@redhat.com>
References: <20230216150701.3654894-1-dhowells@redhat.com>
Fscache has an optimisation by which reads from the cache are skipped
until we know that (a) there's data there to be read and (b) that data
isn't entirely covered by
pages resident in the netfs pagecache.  This is done with two flags
manipulated by fscache_note_page_release():

	if (... test_bit(FSCACHE_COOKIE_HAVE_DATA, &cookie->flags) &&
	    test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags))
		clear_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags);

where the NO_DATA_TO_READ flag causes cachefiles_prepare_read() to
indicate that netfslib should download from the server or clear the page
instead.

The fscache_note_page_release() function is intended to be called from
->releasepage() - but that only gets called if PG_private or PG_private_2
is set - and currently the former is at the discretion of the network
filesystem and the latter is only set whilst a page is being written to
the cache, so sometimes we miss clearing the optimisation.

Fix this by following Willy's suggestion[1] and adding an address_space
flag, AS_RELEASE_ALWAYS, that causes filemap_release_folio() to always
call ->release_folio() if it's set, even if PG_private or PG_private_2
aren't set.

Note that this would require folio_test_private() and page_has_private()
to become more complicated.  To avoid that, in the places[*] where these
are used to conditionalise calls to filemap_release_folio() and
try_to_release_page(), the tests are removed, those functions are just
jumped to unconditionally and the test is performed there.

[*] There are some exceptions in vmscan.c where the check guards more
than just a call to the releaser.  I've added a function,
folio_needs_release(), to wrap all the checks for that.

AS_RELEASE_ALWAYS should be set if a non-NULL cookie is obtained from
fscache and cleared in ->evict_inode() before
truncate_inode_pages_final() is called.

Additionally, the FSCACHE_COOKIE_NO_DATA_TO_READ flag needs to be cleared
and the optimisation cancelled if a cachefiles object already contains
data when we open it.
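[Editorial aside: the two-flag interplay quoted above can be modelled in
plain C as a sketch.  The types here are hypothetical stand-ins — the
kernel's test_bit()/clear_bit() are atomic bitops on arbitrary-length
bitmaps, which this single-word model ignores.]

```c
#include <assert.h>
#include <stdbool.h>

/* Flag names mirror the kernel's; the cookie struct is a stand-in. */
enum {
	FSCACHE_COOKIE_HAVE_DATA,
	FSCACHE_COOKIE_NO_DATA_TO_READ,
};

struct cookie {
	unsigned long flags;
};

static bool test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

static void set_bit(int nr, unsigned long *addr)
{
	*addr |= 1UL << nr;
}

static void clear_bit(int nr, unsigned long *addr)
{
	*addr &= ~(1UL << nr);
}

/*
 * The note-page-release hook: once a page leaves the pagecache while the
 * cache does hold data, reads can no longer be skipped, so the
 * NO_DATA_TO_READ shortcut is withdrawn.
 */
static void fscache_note_page_release(struct cookie *cookie)
{
	if (test_bit(FSCACHE_COOKIE_HAVE_DATA, &cookie->flags) &&
	    test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags))
		clear_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags);
}
```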
Reported-by: Rohith Surabattula
Suggested-by: Matthew Wilcox
Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Linus Torvalds
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Dave Wysochanski
cc: Dominique Martinet
cc: Ilya Dryomov
cc: linux-cachefs@redhat.com
cc: linux-cifs@vger.kernel.org
cc: linux-afs@lists.infradead.org
cc: v9fs-developer@lists.sourceforge.net
cc: ceph-devel@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
Notes:
    ver #4)
     - Split out merging of folio_has_private()/filemap_release_folio()
       call pairs into a preceding patch.
     - Don't need to clear AS_RELEASE_ALWAYS in ->evict_inode().

    ver #3)
     - Fixed mapping_clear_release_always() to use clear_bit() not set_bit().
     - Moved a '&&' to the correct line.

    ver #2)
     - Rewrote entirely according to Willy's suggestion[1].

 fs/9p/cache.c           |  2 ++
 fs/afs/internal.h       |  2 ++
 fs/cachefiles/namei.c   |  2 ++
 fs/ceph/cache.c         |  2 ++
 fs/cifs/fscache.c       |  2 ++
 include/linux/pagemap.h | 16 ++++++++++++++++
 mm/internal.h           |  5 ++++-
 7 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/fs/9p/cache.c b/fs/9p/cache.c
index cebba4eaa0b5..12c0ae29f185 100644
--- a/fs/9p/cache.c
+++ b/fs/9p/cache.c
@@ -68,6 +68,8 @@ void v9fs_cache_inode_get_cookie(struct inode *inode)
 				       &path, sizeof(path),
 				       &version, sizeof(version),
 				       i_size_read(&v9inode->netfs.inode));
+	if (v9inode->netfs.cache)
+		mapping_set_release_always(inode->i_mapping);

 	p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
 		 inode, v9fs_inode_cookie(v9inode));
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index fd8567b98e2b..2d7e06fcb77f 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -680,6 +680,8 @@ static inline void afs_vnode_set_cache(struct afs_vnode *vnode,
 {
 #ifdef CONFIG_AFS_FSCACHE
 	vnode->netfs.cache = cookie;
+	if (cookie)
+		mapping_set_release_always(vnode->netfs.inode.i_mapping);
 #endif
 }
diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index 03ca8f2f657a..50b2ee163af6 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -584,6 +584,8 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
 	if (ret < 0)
 		goto check_failed;

+	clear_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &object->cookie->flags);
+
 	object->file = file;

 	/* Always update the atime on an object we've just looked up (this is
diff --git a/fs/ceph/cache.c b/fs/ceph/cache.c
index 177d8e8d73fe..de1dee46d3df 100644
--- a/fs/ceph/cache.c
+++ b/fs/ceph/cache.c
@@ -36,6 +36,8 @@ void ceph_fscache_register_inode_cookie(struct inode *inode)
 					&ci->i_vino, sizeof(ci->i_vino),
 					&ci->i_version, sizeof(ci->i_version),
 					i_size_read(inode));
+	if (ci->netfs.cache)
+		mapping_set_release_always(inode->i_mapping);
 }

 void ceph_fscache_unregister_inode_cookie(struct ceph_inode_info *ci)
diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
index f6f3a6b75601..79e9665dfc90 100644
--- a/fs/cifs/fscache.c
+++ b/fs/cifs/fscache.c
@@ -108,6 +108,8 @@ void cifs_fscache_get_inode_cookie(struct inode *inode)
 					       &cifsi->uniqueid, sizeof(cifsi->uniqueid),
 					       &cd, sizeof(cd),
 					       i_size_read(&cifsi->netfs.inode));
+	if (cifsi->netfs.cache)
+		mapping_set_release_always(inode->i_mapping);
 }

 void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool update)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 29e1f9e76eb6..a0d433e0addd 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -199,6 +199,7 @@ enum mapping_flags {
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
 	AS_LARGE_FOLIO_SUPPORT = 6,
+	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
 };

 /**
@@ -269,6 +270,21 @@ static inline int mapping_use_writeback_tags(struct address_space *mapping)
 	return !test_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }

+static inline bool mapping_release_always(const struct address_space *mapping)
+{
+	return test_bit(AS_RELEASE_ALWAYS, &mapping->flags);
+}
+
+static inline void mapping_set_release_always(struct address_space *mapping)
+{
+	set_bit(AS_RELEASE_ALWAYS, &mapping->flags);
+}
+
+static inline void mapping_clear_release_always(struct address_space *mapping)
+{
+	clear_bit(AS_RELEASE_ALWAYS, &mapping->flags);
+}
+
 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
 {
 	return mapping->gfp_mask;
diff --git a/mm/internal.h b/mm/internal.h
index c4c8e58e1d12..5421ce8661fa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -168,7 +168,10 @@ static inline void set_page_refcounted(struct page *page)
  */
 static inline bool folio_needs_release(struct folio *folio)
 {
-	return folio_has_private(folio);
+	struct address_space *mapping = folio->mapping;
+
+	return folio_has_private(folio) ||
+	       (mapping && mapping_release_always(mapping));
 }

 extern unsigned long highest_memmap_pfn;
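[Editorial aside: taken together with patch 1/2, the release predicate
ends up as sketched below.  This is a hypothetical, simplified model —
struct address_space and struct folio here are stand-ins for the kernel
structures — showing how a mapping opts in to unconditional
->release_folio() calls.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the AS_RELEASE_ALWAYS bit in enum mapping_flags. */
enum { AS_RELEASE_ALWAYS };

struct address_space {
	unsigned long flags;
};

struct folio {
	struct address_space *mapping;	/* NULL for anonymous folios */
	bool has_private;		/* models folio_has_private() */
};

/* A filesystem calls this once it has obtained a non-NULL fscache cookie. */
static void mapping_set_release_always(struct address_space *mapping)
{
	mapping->flags |= 1UL << AS_RELEASE_ALWAYS;
}

static bool mapping_release_always(const struct address_space *mapping)
{
	return (mapping->flags >> AS_RELEASE_ALWAYS) & 1UL;
}

/*
 * The final form of the predicate: release is needed if the folio
 * carries private data OR its mapping has opted in with
 * AS_RELEASE_ALWAYS.
 */
static bool folio_needs_release(const struct folio *folio)
{
	struct address_space *mapping = folio->mapping;

	return folio->has_private ||
	       (mapping && mapping_release_always(mapping));
}
```

So a netfs inode that gains a cache cookie flips the bit on its mapping,
after which even folios with no private data reach ->release_folio() and
fscache_note_page_release() gets a chance to cancel the read shortcut.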