From patchwork Wed Dec 13 15:23:45 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13491241
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara, Shyam Prasad N,
    Tom Talpey, Dominique Martinet, Eric Van Hensbergen, Ilya Dryomov,
    Christian Brauner, linux-cachefs@redhat.com, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 35/39] netfs: Provide a launder_folio implementation
Date: Wed, 13 Dec 2023 15:23:45 +0000
Message-ID: <20231213152350.431591-36-dhowells@redhat.com>
In-Reply-To: <20231213152350.431591-1-dhowells@redhat.com>
References: <20231213152350.431591-1-dhowells@redhat.com>

Provide a launder_folio implementation for netfslib.
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c    | 74 ++++++++++++++++++++++++++++++++++++
 fs/netfs/main.c              |  1 +
 include/linux/netfs.h        |  2 +
 include/trace/events/netfs.h |  3 ++
 4 files changed, 80 insertions(+)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 42f89f8ea8af..8e0ebb7175a4 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -1122,3 +1122,77 @@ int netfs_writepages(struct address_space *mapping,
 	return ret;
 }
 EXPORT_SYMBOL(netfs_writepages);
+
+/*
+ * Deal with the disposition of a laundered folio.
+ */
+static void netfs_cleanup_launder_folio(struct netfs_io_request *wreq)
+{
+	if (wreq->error) {
+		pr_notice("R=%08x Laundering error %d\n", wreq->debug_id, wreq->error);
+		mapping_set_error(wreq->mapping, wreq->error);
+	}
+}
+
+/**
+ * netfs_launder_folio - Clean up a dirty folio that's being invalidated
+ * @folio: The folio to clean
+ *
+ * This is called to write back a folio that's being invalidated when an inode
+ * is getting torn down.  Ideally, writepages would be used instead.
+ */
+int netfs_launder_folio(struct folio *folio)
+{
+	struct netfs_io_request *wreq;
+	struct address_space *mapping = folio->mapping;
+	struct netfs_folio *finfo = netfs_folio_info(folio);
+	struct netfs_group *group = netfs_folio_group(folio);
+	struct bio_vec bvec;
+	unsigned long long i_size = i_size_read(mapping->host);
+	unsigned long long start = folio_pos(folio);
+	size_t offset = 0, len;
+	int ret = 0;
+
+	if (finfo) {
+		offset = finfo->dirty_offset;
+		start += offset;
+		len = finfo->dirty_len;
+	} else {
+		len = folio_size(folio);
+	}
+	len = min_t(unsigned long long, len, i_size - start);
+
+	wreq = netfs_alloc_request(mapping, NULL, start, len, NETFS_LAUNDER_WRITE);
+	if (IS_ERR(wreq)) {
+		ret = PTR_ERR(wreq);
+		goto out;
+	}
+
+	if (!folio_clear_dirty_for_io(folio))
+		goto out_put;
+
+	trace_netfs_folio(folio, netfs_folio_trace_launder);
+
+	_debug("launder %llx-%llx", start, start + len - 1);
+
+	/* Speculatively write to the cache.  We have to fix this up later if
+	 * the store fails.
+	 */
+	wreq->cleanup = netfs_cleanup_launder_folio;
+
+	bvec_set_folio(&bvec, folio, len, offset);
+	iov_iter_bvec(&wreq->iter, ITER_SOURCE, &bvec, 1, len);
+	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
+	ret = netfs_begin_write(wreq, true, netfs_write_trace_launder);
+
+out_put:
+	folio_detach_private(folio);
+	netfs_put_group(group);
+	kfree(finfo);
+	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
+out:
+	folio_wait_fscache(folio);
+	_leave(" = %d", ret);
+	return ret;
+}
+EXPORT_SYMBOL(netfs_launder_folio);
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 9fe96de6960e..8d5ee0f56f28 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READPAGE]		= "RP",
 	[NETFS_READ_FOR_WRITE]		= "RW",
 	[NETFS_WRITEBACK]		= "WB",
+	[NETFS_LAUNDER_WRITE]		= "LW",
 	[NETFS_UNBUFFERED_WRITE]	= "UW",
 	[NETFS_DIO_READ]		= "DR",
 	[NETFS_DIO_WRITE]		= "DW",
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ef17d94a2fbd..a7c2cb856e81 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -227,6 +227,7 @@ enum netfs_io_origin {
 	NETFS_READPAGE,			/* This read is a synchronous read */
 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+	NETFS_LAUNDER_WRITE,		/* This is triggered by ->launder_folio() */
 	NETFS_UNBUFFERED_WRITE,		/* This is an unbuffered write */
 	NETFS_DIO_READ,			/* This is a direct I/O read */
 	NETFS_DIO_WRITE,		/* This is a direct I/O write */
@@ -407,6 +408,7 @@ int netfs_unpin_writeback(struct inode *inode, struct writeback_control *wbc);
 void netfs_clear_inode_writeback(struct inode *inode, const void *aux);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
 bool netfs_release_folio(struct folio *folio, gfp_t gfp);
+int netfs_launder_folio(struct folio *folio);
 
 /* VMA operations API. */
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 914a24b03d08..cc998798e20a 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -25,6 +25,7 @@
 
 #define netfs_write_traces					\
 	EM(netfs_write_trace_dio_write,		"DIO-WRITE")	\
+	EM(netfs_write_trace_launder,		"LAUNDER  ")	\
 	EM(netfs_write_trace_unbuffered_write,	"UNB-WRITE")	\
 	E_(netfs_write_trace_writeback,		"WRITEBACK")
 
@@ -33,6 +34,7 @@
 	EM(NETFS_READPAGE,			"RP")		\
 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
 	EM(NETFS_WRITEBACK,			"WB")		\
+	EM(NETFS_LAUNDER_WRITE,			"LW")		\
 	EM(NETFS_UNBUFFERED_WRITE,		"UW")		\
 	EM(NETFS_DIO_READ,			"DR")		\
 	E_(NETFS_DIO_WRITE,			"DW")
@@ -127,6 +129,7 @@
 	EM(netfs_folio_trace_end_copy,		"end-copy")	\
 	EM(netfs_folio_trace_filled_gaps,	"filled-gaps")	\
 	EM(netfs_folio_trace_kill,		"kill")		\
+	EM(netfs_folio_trace_launder,		"launder")	\
 	EM(netfs_folio_trace_mkwrite,		"mkwrite")	\
 	EM(netfs_folio_trace_mkwrite_plus,	"mkwrite+")	\
 	EM(netfs_folio_trace_read_gaps,		"read-gaps")	\
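
For context: a filesystem converted to netfslib would expose the new helper
through its address_space_operations so that the invalidation paths (for
example invalidate_inode_pages2(), which launders dirty folios before
dropping them) reach netfs_launder_folio().  The sketch below is
illustrative only and not part of this patch; "example_netfs_aops" is a
hypothetical ops table, and a real conversion would fill in whichever other
netfs_* helpers the filesystem actually uses.

/* Hypothetical wiring, not part of this patch: a netfs-backed filesystem
 * plugging the new launder helper into its address_space_operations.
 */
static const struct address_space_operations example_netfs_aops = {
	.read_folio		= netfs_read_folio,
	.readahead		= netfs_readahead,
	.writepages		= netfs_writepages,
	.invalidate_folio	= netfs_invalidate_folio,
	.release_folio		= netfs_release_folio,
	.launder_folio		= netfs_launder_folio,	/* added by this patch */
};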