From patchwork Wed Mar 27 15:04:04 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13606763
From: David Howells
To: Matthew Wilcox, Miklos Szeredi, Trond Myklebust, Christoph Hellwig
cc: dhowells@redhat.com, Andrew Morton, Alexander Viro, Christian Brauner,
    Jeff Layton, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    netfs@lists.linux.dev, v9fs@lists.linux.dev, linux-afs@lists.infradead.org,
    ceph-devel@vger.kernel.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, devel@lists.orangefs.org,
    linux-kernel@vger.kernel.org
Subject: [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio
Date: Wed, 27 Mar 2024 15:04:04 +0000
Message-ID: <2318298.1711551844@warthog.procyon.org.uk>

Implement a replacement for launder_folio[1].

The key feature of invalidate_inode_pages2() is that it locks each folio
individually, unmaps it to prevent mmap'd accesses from interfering, and
calls the ->launder_folio() address_space op to flush it.  This has
problems: firstly, each folio is written individually as one or more small
writes; secondly, adjacent folios cannot easily be batched into the same
laundering write; thirdly, it's yet another op to implement.
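For context only, a ->launder_folio() implementation is typically a
single-folio writeback hook along the lines of the sketch below.  This is
purely illustrative and not part of the patch: "myfs" is a made-up
filesystem and myfs_write_folio_to_server() stands in for whatever
single-folio write path the filesystem has; only the .launder_folio hook
itself is real:

static int myfs_launder_folio(struct folio *folio)
{
	int err = 0;

	/* invalidate_inode_pages2() hands us one locked, unmapped folio at a
	 * time, so each dirty folio goes out as its own small write; adjacent
	 * dirty folios can't be batched here.
	 */
	if (folio_clear_dirty_for_io(folio))
		err = myfs_write_folio_to_server(folio);

	return err;
}

static const struct address_space_operations myfs_aops = {
	/* ... */
	.launder_folio	= myfs_launder_folio,
};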
Here's a bit of a hacked-together solution which should probably be moved to
mm/: use the mmap lock to make future faulting wait, unmap all the folios if
we have mmaps, then, conditionally, use ->writepages() to flush any dirty
data back, and finally discard all the pages.  The caller needs to hold a
lock to prevent ->write_iter() from getting underfoot.

Note that this does not prevent ->read_iter() from accessing the file whilst
we do this, since reads may operate without locking.

We also have the writeback_control available and so have the opportunity to
set a flag in it to tell the filesystem that we're doing an invalidation.

Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Miklos Szeredi
cc: Trond Myklebust
cc: Christoph Hellwig
cc: Andrew Morton
cc: Alexander Viro
cc: Christian Brauner
cc: Jeff Layton
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
cc: netfs@lists.linux.dev
cc: v9fs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: ceph-devel@vger.kernel.org
cc: linux-cifs@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: devel@lists.orangefs.org
Link: https://lore.kernel.org/r/1668172.1709764777@warthog.procyon.org.uk/ [1]
---
 fs/netfs/misc.c       | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h |  3 ++
 mm/memory.c           |  3 +-
 3 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index bc1fc54fb724..774ce825fbec 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -250,3 +250,59 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
 	return true;
 }
 EXPORT_SYMBOL(netfs_release_folio);
+
+extern void unmap_mapping_range_tree(struct rb_root_cached *root,
+				     pgoff_t first_index,
+				     pgoff_t last_index,
+				     struct zap_details *details);
+
+/**
+ * netfs_invalidate_inode - Invalidate/forcibly write back an inode's pagecache
+ * @inode: The inode to flush
+ * @flush: Set to write back rather than simply invalidate.
+ *
+ * Invalidate all the folios on an inode, possibly writing them back first.
+ * Whilst the operation is undertaken, the mmap lock is held to prevent
+ * ->fault() from reinstalling the folios.  The caller must hold a lock on the
+ * inode sufficient to prevent ->write_iter() from dirtying more folios.
+ */
+int netfs_invalidate_inode(struct inode *inode, bool flush)
+{
+	struct address_space *mapping = inode->i_mapping;
+
+	if (!mapping || !mapping->nrpages)
+		goto out;
+
+	/* Prevent folios from being faulted in. */
+	i_mmap_lock_write(mapping);
+
+	if (!mapping->nrpages)
+		goto unlock;
+
+	/* Assume there are probably PTEs only if there are mmaps. */
+	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))) {
+		struct zap_details details = { };
+
+		unmap_mapping_range_tree(&mapping->i_mmap, 0, LLONG_MAX, &details);
+	}
+
+	/* Write back the data if we're asked to. */
+	if (flush) {
+		struct writeback_control wbc = {
+			.sync_mode	= WB_SYNC_ALL,
+			.nr_to_write	= LONG_MAX,
+			.range_start	= 0,
+			.range_end	= LLONG_MAX,
+		};
+
+		filemap_fdatawrite_wbc(mapping, &wbc);
+	}
+
+	/* Wait for writeback to complete on all folios and discard. */
+	truncate_inode_pages_range(mapping, 0, LLONG_MAX);
+
+unlock:
+	i_mmap_unlock_write(mapping);
+out:
+	return filemap_check_errors(mapping);
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 298552f5122c..40dc34ee291d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -400,6 +400,9 @@ ssize_t netfs_buffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *fr
 ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from);
 ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
 
+/* High-level invalidation API */
+int netfs_invalidate_inode(struct inode *inode, bool flush);
+
 /* Address operations API */
 struct readahead_control;
 void netfs_readahead(struct readahead_control *);
diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..106f32c7d7fb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3665,7 +3665,7 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
 	zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
 }
 
-static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
+inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 					    pgoff_t first_index,
 					    pgoff_t last_index,
 					    struct zap_details *details)
@@ -3685,6 +3685,7 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 			details);
 	}
 }
+EXPORT_SYMBOL_GPL(unmap_mapping_range_tree);
 
 /**
  * unmap_mapping_folio() - Unmap single folio from processes.
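
For illustration only, not part of the diff: a caller would be expected to
take a lock that excludes ->write_iter() and then ask for flush-and-discard.
Something along these lines, where myfs_flush_and_invalidate() is a
hypothetical helper:

static int myfs_flush_and_invalidate(struct inode *inode)
{
	int ret;

	/* Keep ->write_iter() from dirtying more folios whilst we work. */
	inode_lock(inode);

	/* Write dirty data back, then discard the pagecache; passing
	 * flush=false would invalidate without writing back first.
	 */
	ret = netfs_invalidate_inode(inode, true);

	inode_unlock(inode);
	return ret;
}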