From patchwork Fri Nov 17 21:15:30 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13459595
From: David Howells <dhowells@redhat.com>
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-cachefs@redhat.com,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 38/51] netfs: Implement a write-through caching option
Date: Fri, 17 Nov 2023 21:15:30 +0000
Message-ID: <20231117211544.1740466-39-dhowells@redhat.com>
In-Reply-To: <20231117211544.1740466-1-dhowells@redhat.com>
References: <20231117211544.1740466-1-dhowells@redhat.com>

Provide a flag whereby a filesystem may request that netfs_perform_write()
perform write-through caching.  This involves putting pages directly into
writeback rather than dirtying them, attaching them to a write operation
as we go.
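
As an illustrative sketch (not part of this patch; the helper name is made
up), a filesystem might opt in when it sets up a netfs inode:

	static void example_enable_writethrough(struct inode *inode)
	{
		struct netfs_inode *ctx = netfs_inode(inode);

		/* Ask netfs_perform_write() to write through the pagecache
		 * rather than just dirtying folios.
		 */
		set_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags);
	}

Writes with IOCB_DSYNC or IOCB_SYNC set take the same path regardless of
the flag.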
Further, the writes being made are limited to the byte range being written
rather than whole folios being written.  This can be used by cifs, for
example, to deal with strict byte-range locking.

This can't be used with content encryption as that may require expansion of
the write RPC beyond the write being made.

This doesn't affect writes via mmap - those are written back in the normal
way; similarly, failed writethrough writes are marked dirty and left to
writeback to retry.  Another option would be to simply invalidate them, but
the contents can be simultaneously accessed by read() and through mmap.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c    | 67 +++++++++++++++++++++++----
 fs/netfs/internal.h          |  3 ++
 fs/netfs/main.c              |  1 +
 fs/netfs/objects.c           |  1 +
 fs/netfs/output.c            | 90 ++++++++++++++++++++++++++++++++++++
 include/linux/netfs.h        |  2 +
 include/trace/events/netfs.h |  8 +++-
 7 files changed, 160 insertions(+), 12 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 43e825b882ff..1e8829ad2cbf 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -26,6 +26,8 @@ enum netfs_how_to_modify {
 	NETFS_FLUSH_CONTENT,		/* Flush incompatible content. */
 };
 
+static void netfs_cleanup_buffered_write(struct netfs_io_request *wreq);
+
 static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
 {
 	if (netfs_group && !folio_get_private(folio))
@@ -135,6 +137,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct netfs_inode *ctx = netfs_inode(inode);
+	struct writeback_control wbc = {
+		.sync_mode	= WB_SYNC_NONE,
+		.for_sync	= true,
+		.nr_to_write	= LONG_MAX,
+		.range_start	= iocb->ki_pos,
+		.range_end	= iocb->ki_pos + iter->count,
+	};
+	struct netfs_io_request *wreq = NULL;
 	struct netfs_folio *finfo;
 	struct folio *folio;
 	enum netfs_how_to_modify howto;
@@ -145,6 +155,30 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 	size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;
 	bool maybe_trouble = false;
 
+	if (unlikely(test_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags) ||
+		     iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC))
+	    ) {
+		if (pos < i_size_read(inode)) {
+			ret = filemap_write_and_wait_range(mapping, pos, pos + iter->count);
+			if (ret < 0) {
+				goto out;
+			}
+		}
+
+		wbc_attach_fdatawrite_inode(&wbc, mapping->host);
+
+		wreq = netfs_begin_writethrough(iocb, iter->count);
+		if (IS_ERR(wreq)) {
+			wbc_detach_inode(&wbc);
+			ret = PTR_ERR(wreq);
+			wreq = NULL;
+			goto out;
+		}
+		if (!is_sync_kiocb(iocb))
+			wreq->iocb = iocb;
+		wreq->cleanup = netfs_cleanup_buffered_write;
+	}
+
 	do {
 		size_t flen;
 		size_t offset;	/* Offset into pagecache folio */
@@ -314,7 +348,23 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 		}
 		written += copied;
 
-		folio_mark_dirty(folio);
+		if (likely(!wreq)) {
+			folio_mark_dirty(folio);
+		} else {
+			if (folio_test_dirty(folio))
+				/* Sigh.  mmap. */
+				folio_clear_dirty_for_io(folio);
+			/* We make multiple writes to the folio... */
+			if (!folio_test_writeback(folio)) {
+				folio_start_writeback(folio);
+				if (wreq->iter.count == 0)
+					trace_netfs_folio(folio, netfs_folio_trace_wthru);
+				else
+					trace_netfs_folio(folio, netfs_folio_trace_wthru_plus);
+			}
+			netfs_advance_writethrough(wreq, copied,
+						   offset + copied == flen);
+		}
 	retry:
 		folio_unlock(folio);
 		folio_put(folio);
@@ -324,17 +374,14 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
 		cond_resched();
 	} while (iov_iter_count(iter));
 
 out:
-	if (likely(written)) {
-		/* Flush and wait for a write that requires immediate synchronisation. */
-		if (iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC)) {
-			_debug("dsync");
-			ret = filemap_fdatawait_range(mapping, iocb->ki_pos,
-						      iocb->ki_pos + written);
-		}
-
-		iocb->ki_pos += written;
+	if (unlikely(wreq)) {
+		ret = netfs_end_writethrough(wreq, iocb);
+		wbc_detach_inode(&wbc);
+		if (ret == -EIOCBQUEUED)
+			return ret;
 	}
 
+	iocb->ki_pos += written;
 	_leave(" = %zd [%zd]", written, ret);
 	return written ? written : ret;
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 782b73b1f5a7..d7f31e660645 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -111,6 +111,9 @@ static inline void netfs_see_request(struct netfs_io_request *rreq,
  */
 int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
 		      enum netfs_write_trace what);
+struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len);
+int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end);
+int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb);
 
 /*
  * stats.c
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 577c8a9fc0f2..ed540c5dec8d 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READPAGE]		= "RP",
 	[NETFS_READ_FOR_WRITE]		= "RW",
 	[NETFS_WRITEBACK]		= "WB",
+	[NETFS_WRITETHROUGH]		= "WT",
 	[NETFS_LAUNDER_WRITE]		= "LW",
 	[NETFS_RMW_READ]		= "RM",
 	[NETFS_UNBUFFERED_WRITE]	= "UW",
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 6bf3b3f51499..14cdf34e767e 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -41,6 +41,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	rreq->debug_id	= atomic_inc_return(&debug_ids);
 	xa_init(&rreq->bounce);
 	INIT_LIST_HEAD(&rreq->subrequests);
+	INIT_WORK(&rreq->work, NULL);
 	refcount_set(&rreq->ref, 1);
 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
 
diff --git a/fs/netfs/output.c b/fs/netfs/output.c
index 2d2530dc9507..255027b472d7 100644
--- a/fs/netfs/output.c
+++ b/fs/netfs/output.c
@@ -393,3 +393,93 @@ int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
 		    TASK_UNINTERRUPTIBLE);
 	return wreq->error;
 }
+
+/*
+ * Begin a write operation for writing through the pagecache.
+ */
+struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len)
+{
+	struct netfs_io_request *wreq;
+	struct file *file = iocb->ki_filp;
+
+	wreq = netfs_alloc_request(file->f_mapping, file, iocb->ki_pos, len,
+				   NETFS_WRITETHROUGH);
+	if (IS_ERR(wreq))
+		return wreq;
+
+	trace_netfs_write(wreq, netfs_write_trace_writethrough);
+
+	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
+	iov_iter_xarray(&wreq->iter, ITER_SOURCE, &wreq->mapping->i_pages, wreq->start, 0);
+	wreq->io_iter = wreq->iter;
+
+	/* ->outstanding > 0 carries a ref */
+	netfs_get_request(wreq, netfs_rreq_trace_get_for_outstanding);
+	atomic_set(&wreq->nr_outstanding, 1);
+	return wreq;
+}
+
+static void netfs_submit_writethrough(struct netfs_io_request *wreq, bool final)
+{
+	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+	unsigned long long start;
+	size_t len;
+
+	if (!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
+		return;
+
+	start = wreq->start + wreq->submitted;
+	len = wreq->iter.count - wreq->submitted;
+	if (!final) {
+		len /= wreq->wsize; /* Round to number of maximum packets */
+		len *= wreq->wsize;
+	}
+
+	ictx->ops->create_write_requests(wreq, start, len);
+	wreq->submitted += len;
+}
+
+/*
+ * Advance the state of the write operation used when writing through the
+ * pagecache.  Data has been copied into the pagecache that we need to append
+ * to the request.  If we've added more than wsize then we need to create a new
+ * subrequest.
+ */
+int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end)
+{
+	_enter("ic=%zu sb=%zu ws=%u cp=%zu tp=%u",
+	       wreq->iter.count, wreq->submitted, wreq->wsize, copied, to_page_end);
+
+	wreq->iter.count += copied;
+	wreq->io_iter.count += copied;
+	if (to_page_end && wreq->io_iter.count - wreq->submitted >= wreq->wsize)
+		netfs_submit_writethrough(wreq, false);
+
+	return wreq->error;
+}
+
+/*
+ * End a write operation used when writing through the pagecache.
+ */
+int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb)
+{
+	int ret = -EIOCBQUEUED;
+
+	_enter("ic=%zu sb=%zu ws=%u",
+	       wreq->iter.count, wreq->submitted, wreq->wsize);
+
+	if (wreq->submitted < wreq->io_iter.count)
+		netfs_submit_writethrough(wreq, true);
+
+	if (atomic_dec_and_test(&wreq->nr_outstanding))
+		netfs_write_terminated(wreq, false);
+
+	if (is_sync_kiocb(iocb)) {
+		wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
+			    TASK_UNINTERRUPTIBLE);
+		ret = wreq->error;
+	}
+
+	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
+	return ret;
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 474b3a0f202a..39f885fea383 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -143,6 +143,7 @@ struct netfs_inode {
 #define NETFS_ICTX_ODIRECT	0		/* The file has DIO in progress */
 #define NETFS_ICTX_UNBUFFERED	1		/* I/O should not use the pagecache */
 #define NETFS_ICTX_ENCRYPTED	2		/* The file contents are encrypted */
+#define NETFS_ICTX_WRITETHROUGH	3		/* Write-through caching */
 	unsigned char		min_bshift;	/* log2 min block size for bounding box or 0 */
 	unsigned char		crypto_bshift;	/* log2 of crypto block size */
 	unsigned char		crypto_trailer;	/* Size of crypto trailer */
@@ -234,6 +235,7 @@ enum netfs_io_origin {
 	NETFS_READPAGE,			/* This read is a synchronous read */
 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+	NETFS_WRITETHROUGH,		/* This write was made by netfs_perform_write() */
 	NETFS_LAUNDER_WRITE,		/* This is triggered by ->launder_folio() */
 	NETFS_RMW_READ,			/* This is an unbuffered read for RMW */
 	NETFS_UNBUFFERED_WRITE,		/* This is an unbuffered write */
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 54b2d781d3a9..04cbe803c251 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -27,13 +27,15 @@
 	EM(netfs_write_trace_dio_write,		"DIO-WRITE")	\
 	EM(netfs_write_trace_launder,		"LAUNDER  ")	\
 	EM(netfs_write_trace_unbuffered_write,	"UNB-WRITE")	\
-	E_(netfs_write_trace_writeback,		"WRITEBACK")
+	EM(netfs_write_trace_writeback,		"WRITEBACK")	\
+	E_(netfs_write_trace_writethrough,	"WRITETHRU")
 
 #define netfs_rreq_origins					\
 	EM(NETFS_READAHEAD,			"RA")		\
 	EM(NETFS_READPAGE,			"RP")		\
 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
 	EM(NETFS_WRITEBACK,			"WB")		\
+	EM(NETFS_WRITETHROUGH,			"WT")		\
 	EM(NETFS_LAUNDER_WRITE,			"LW")		\
 	EM(NETFS_RMW_READ,			"RM")		\
 	EM(NETFS_UNBUFFERED_WRITE,		"UW")		\
@@ -138,7 +140,9 @@
 	EM(netfs_folio_trace_redirty,		"redirty")	\
 	EM(netfs_folio_trace_redirtied,		"redirtied")	\
 	EM(netfs_folio_trace_store,		"store")	\
-	E_(netfs_folio_trace_store_plus,	"store+")
+	EM(netfs_folio_trace_store_plus,	"store+")	\
+	EM(netfs_folio_trace_wthru,		"wthru")	\
+	E_(netfs_folio_trace_wthru_plus,	"wthru+")
 
 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
 #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY