From patchwork Mon Feb 20 13:43:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 13146460 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8A03C64EC4 for ; Mon, 20 Feb 2023 13:44:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232221AbjBTNoI (ORCPT ); Mon, 20 Feb 2023 08:44:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232222AbjBTNoG (ORCPT ); Mon, 20 Feb 2023 08:44:06 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45D901C585 for ; Mon, 20 Feb 2023 05:43:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1676900593; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Y3EjzB0OOl0TQdFJ8eoQ91vfL3E1hDWFQA6V6T6kY+k=; b=PRI3kBvubnvGdFJfSMIBE66RkPEsUfIXsPsib7k6+8dMXCdF9WyxBEafnV394jGZ/v+YHa 7410q3QpMaUKrtgfuZ6jB+DIBRFYh0C4JQaqLw4BsZb9lMtvsvKCFHqVDMYCN1Qn4X00eM szqvujwSYTDrx4PHOcgNIGcHhuiRJZU= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-278-aKVzqjynPfCh65w_jFmJ1w-1; Mon, 20 Feb 2023 08:43:10 -0500 X-MC-Unique: aKVzqjynPfCh65w_jFmJ1w-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 044C1802D32; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) Received: from dwysocha.rdu.csb (unknown [10.22.8.11]) by smtp.corp.redhat.com (Postfix) with ESMTPS id B9207404CD84; Mon, 20 Feb 2023 13:43:09 +0000 (UTC) From: Dave Wysochanski To: Anna Schumaker , Trond Myklebust , David Howells , Benjamin Maynard , Daire Byrne Cc: linux-nfs@vger.kernel.org, linux-cachefs@redhat.com Subject: [PATCH v11 1/5] NFS: Rename readpage_async_filler to nfs_read_add_folio Date: Mon, 20 Feb 2023 08:43:04 -0500 Message-Id: <20230220134308.1193219-2-dwysocha@redhat.com> In-Reply-To: <20230220134308.1193219-1-dwysocha@redhat.com> References: <20230220134308.1193219-1-dwysocha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Rename readpage_async_filler to nfs_read_add_folio to better reflect what this function does (add a folio to the nfs_pageio_descriptor), and simplify arguments to this function by removing struct nfs_readdesc. 
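To make the new calling convention concrete, here is a condensed sketch of how a caller now drives a single-folio read after the rename (distilled from the nfs_read_folio() hunk in the diff below; tracing, locking checks and error paths are trimmed, and the wrapper name example_read_one_folio is illustrative only):

static int example_read_one_folio(struct file *file, struct folio *folio)
{
	struct inode *inode = file_inode(file);
	struct nfs_pageio_descriptor pgio;
	struct nfs_open_context *ctx;
	int ret;

	ctx = get_nfs_open_context(nfs_file_open_context(file));
	nfs_pageio_init_read(&pgio, inode, false,
			     &nfs_async_read_completion_ops);

	/* pgio and ctx are now passed explicitly; no struct nfs_readdesc */
	ret = nfs_read_add_folio(&pgio, ctx, folio);
	if (!ret) {
		nfs_pageio_complete_read(&pgio);
		ret = pgio.pg_error < 0 ? pgio.pg_error : 0;
	}

	put_nfs_open_context(ctx);
	return ret;
}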
Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 54 +++++++++++++++++++++++++-------------------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index c380cff4108e..4cb3991c4735 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -126,11 +126,6 @@ static void nfs_readpage_release(struct nfs_page *req, int error) nfs_release_request(req); } -struct nfs_readdesc { - struct nfs_pageio_descriptor pgio; - struct nfs_open_context *ctx; -}; - static void nfs_page_group_set_uptodate(struct nfs_page *req) { if (nfs_page_group_sync_on_bit(req, PG_UPTODATE)) @@ -152,7 +147,8 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) { /* note: regions of the page not covered by a - * request are zeroed in readpage_async_filler */ + * request are zeroed in nfs_read_add_folio + */ if (bytes > hdr->good_bytes) { /* nothing in this request was good, so zero * the full extent of the request */ @@ -280,7 +276,9 @@ static void nfs_readpage_result(struct rpc_task *task, nfs_readpage_retry(task, hdr); } -static int readpage_async_filler(struct nfs_readdesc *desc, struct folio *folio) +static int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, + struct nfs_open_context *ctx, + struct folio *folio) { struct inode *inode = folio_file_mapping(folio)->host; struct nfs_server *server = NFS_SERVER(inode); @@ -302,15 +300,15 @@ static int readpage_async_filler(struct nfs_readdesc *desc, struct folio *folio) goto out_unlock; } - new = nfs_page_create_from_folio(desc->ctx, folio, 0, aligned_len); + new = nfs_page_create_from_folio(ctx, folio, 0, aligned_len); if (IS_ERR(new)) goto out_error; if (len < fsize) folio_zero_segment(folio, len, fsize); - if (!nfs_pageio_add_request(&desc->pgio, new)) { + if (!nfs_pageio_add_request(pgio, new)) { nfs_list_remove_request(new); - error = desc->pgio.pg_error; + error = pgio->pg_error; nfs_readpage_release(new, error); goto out; } @@ -331,8 +329,9 @@ static int readpage_async_filler(struct nfs_readdesc *desc, struct folio *folio) */ int nfs_read_folio(struct file *file, struct folio *folio) { - struct nfs_readdesc desc; struct inode *inode = file_inode(file); + struct nfs_pageio_descriptor pgio; + struct nfs_open_context *ctx; int ret; trace_nfs_aop_readpage(inode, folio); @@ -355,25 +354,25 @@ int nfs_read_folio(struct file *file, struct folio *folio) if (NFS_STALE(inode)) goto out_unlock; - desc.ctx = get_nfs_open_context(nfs_file_open_context(file)); + ctx = get_nfs_open_context(nfs_file_open_context(file)); - xchg(&desc.ctx->error, 0); - nfs_pageio_init_read(&desc.pgio, inode, false, + xchg(&ctx->error, 0); + nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); - ret = readpage_async_filler(&desc, folio); + ret = nfs_read_add_folio(&pgio, ctx, folio); if (ret) goto out; - nfs_pageio_complete_read(&desc.pgio); - ret = desc.pgio.pg_error < 0 ? desc.pgio.pg_error : 0; + nfs_pageio_complete_read(&pgio); + ret = pgio.pg_error < 0 ? 
pgio.pg_error : 0; if (!ret) { ret = folio_wait_locked_killable(folio); if (!folio_test_uptodate(folio) && !ret) - ret = xchg(&desc.ctx->error, 0); + ret = xchg(&ctx->error, 0); } out: - put_nfs_open_context(desc.ctx); + put_nfs_open_context(ctx); trace_nfs_aop_readpage_done(inode, folio, ret); return ret; out_unlock: @@ -384,9 +383,10 @@ int nfs_read_folio(struct file *file, struct folio *folio) void nfs_readahead(struct readahead_control *ractl) { + struct nfs_pageio_descriptor pgio; + struct nfs_open_context *ctx; unsigned int nr_pages = readahead_count(ractl); struct file *file = ractl->file; - struct nfs_readdesc desc; struct inode *inode = ractl->mapping->host; struct folio *folio; int ret; @@ -400,24 +400,24 @@ void nfs_readahead(struct readahead_control *ractl) if (file == NULL) { ret = -EBADF; - desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); - if (desc.ctx == NULL) + ctx = nfs_find_open_context(inode, NULL, FMODE_READ); + if (ctx == NULL) goto out; } else - desc.ctx = get_nfs_open_context(nfs_file_open_context(file)); + ctx = get_nfs_open_context(nfs_file_open_context(file)); - nfs_pageio_init_read(&desc.pgio, inode, false, + nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); while ((folio = readahead_folio(ractl)) != NULL) { - ret = readpage_async_filler(&desc, folio); + ret = nfs_read_add_folio(&pgio, ctx, folio); if (ret) break; } - nfs_pageio_complete_read(&desc.pgio); + nfs_pageio_complete_read(&pgio); - put_nfs_open_context(desc.ctx); + put_nfs_open_context(ctx); out: trace_nfs_aop_readahead_done(inode, nr_pages, ret); } From patchwork Mon Feb 20 13:43:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 13146462 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96424C05027 for ; Mon, 20 Feb 2023 13:44:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232248AbjBTNoK (ORCPT ); Mon, 20 Feb 2023 08:44:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232234AbjBTNoH (ORCPT ); Mon, 20 Feb 2023 08:44:07 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8399F93F8 for ; Mon, 20 Feb 2023 05:43:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1676900591; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hEDbCVpvz4HlHLs8NXjBlLOrSIxl7uOH070emD8jg1I=; b=Kdbh/v9aV4FoGQ7OdYWWCGBpkk8Fp7f5+pPqjqQ/fy5D1up09l229RjkPxt9f6KVIzwIpU 61Mwf2opvA+K+DbBic/rdXvguYk8bTTX/n661T/Z+PfnrT2fMyK+nIiqXBFMpPHaN7/bFz ow9Hcr7ST0IFW/y36jaOySFXGVhgSe0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-288-hgyg4flMMc-jC-NHrwLBog-1; Mon, 20 Feb 2023 08:43:10 -0500 X-MC-Unique: hgyg4flMMc-jC-NHrwLBog-1 Received: from smtp.corp.redhat.com 
(int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 50767100F907; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) Received: from dwysocha.rdu.csb (unknown [10.22.8.11]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 0F22D404CD84; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) From: Dave Wysochanski To: Anna Schumaker , Trond Myklebust , David Howells , Benjamin Maynard , Daire Byrne Cc: linux-nfs@vger.kernel.org, linux-cachefs@redhat.com Subject: [PATCH v11 2/5] NFS: Configure support for netfs when NFS fscache is configured Date: Mon, 20 Feb 2023 08:43:05 -0500 Message-Id: <20230220134308.1193219-3-dwysocha@redhat.com> In-Reply-To: <20230220134308.1193219-1-dwysocha@redhat.com> References: <20230220134308.1193219-1-dwysocha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org As first steps for support of the netfs library when NFS_FSCACHE is configured, add NETFS_SUPPORT to Kconfig and add the required netfs_inode into struct nfs_inode. Using netfs requires we move the VFS inode structure to be stored inside struct netfs_inode, along with the fscache_cookie. Thus, if NFS_FSCACHE is configured, place netfs_inode inside an anonymous union so the vfs_inode memory is the same and we do not need to modify other non-fscache areas of NFS. In addition, inside the NFS fscache code, use the new helpers, netfs_inode() and netfs_i_cookie() helpers, and remove our own helper, nfs_i_fscache(). Later patches will convert NFS fscache to fully use netfs. Signed-off-by: Dave Wysochanski --- fs/nfs/Kconfig | 1 + fs/nfs/fscache.c | 20 +++++++++----------- fs/nfs/fscache.h | 15 ++++++--------- include/linux/nfs_fs.h | 24 ++++++++++-------------- 4 files changed, 26 insertions(+), 34 deletions(-) diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig index 1ead5bd740c2..a7942c77415d 100644 --- a/fs/nfs/Kconfig +++ b/fs/nfs/Kconfig @@ -171,6 +171,7 @@ config ROOT_NFS config NFS_FSCACHE bool "Provide NFS client caching support" depends on NFS_FS=m && FSCACHE || NFS_FS=y && FSCACHE=y + select NETFS_SUPPORT help Say Y here if you want NFS data to be cached locally on disc through the general filesystem cache manager diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index e731c00a9fcb..313c1519510c 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -163,13 +163,14 @@ void nfs_fscache_init_inode(struct inode *inode) struct nfs_server *nfss = NFS_SERVER(inode); struct nfs_inode *nfsi = NFS_I(inode); - nfsi->fscache = NULL; + netfs_inode(inode)->cache = NULL; if (!(nfss->fscache && S_ISREG(inode->i_mode))) return; nfs_fscache_update_auxdata(&auxdata, inode); - nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache, + netfs_inode(inode)->cache = fscache_acquire_cookie( + nfss->fscache, 0, nfsi->fh.data, /* index_key */ nfsi->fh.size, @@ -183,11 +184,8 @@ void nfs_fscache_init_inode(struct inode *inode) */ void nfs_fscache_clear_inode(struct inode *inode) { - struct nfs_inode *nfsi = NFS_I(inode); - struct fscache_cookie *cookie = nfs_i_fscache(inode); - - fscache_relinquish_cookie(cookie, false); - nfsi->fscache = NULL; + fscache_relinquish_cookie(netfs_i_cookie(netfs_inode(inode)), false); + netfs_inode(inode)->cache = NULL; } /* @@ -212,7 +210,7 @@ void nfs_fscache_clear_inode(struct inode *inode) void nfs_fscache_open_file(struct inode *inode, struct file *filp) { struct 
nfs_fscache_inode_auxdata auxdata; - struct fscache_cookie *cookie = nfs_i_fscache(inode); + struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode)); bool open_for_write = inode_is_open_for_write(inode); if (!fscache_cookie_valid(cookie)) @@ -230,7 +228,7 @@ EXPORT_SYMBOL_GPL(nfs_fscache_open_file); void nfs_fscache_release_file(struct inode *inode, struct file *filp) { struct nfs_fscache_inode_auxdata auxdata; - struct fscache_cookie *cookie = nfs_i_fscache(inode); + struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode)); loff_t i_size = i_size_read(inode); nfs_fscache_update_auxdata(&auxdata, inode); @@ -243,7 +241,7 @@ void nfs_fscache_release_file(struct inode *inode, struct file *filp) static int fscache_fallback_read_page(struct inode *inode, struct page *page) { struct netfs_cache_resources cres; - struct fscache_cookie *cookie = nfs_i_fscache(inode); + struct fscache_cookie *cookie = netfs_i_cookie(&NFS_I(inode)->netfs); struct iov_iter iter; struct bio_vec bvec[1]; int ret; @@ -271,7 +269,7 @@ static int fscache_fallback_write_page(struct inode *inode, struct page *page, bool no_space_allocated_yet) { struct netfs_cache_resources cres; - struct fscache_cookie *cookie = nfs_i_fscache(inode); + struct fscache_cookie *cookie = netfs_i_cookie(&NFS_I(inode)->netfs); struct iov_iter iter; struct bio_vec bvec[1]; loff_t start = page_offset(page); diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 2a37af880978..38614ed8f951 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -54,7 +54,7 @@ static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) if (current_is_kswapd() || !(gfp & __GFP_FS)) return false; folio_wait_fscache(folio); - fscache_note_page_release(nfs_i_fscache(folio->mapping->host)); + fscache_note_page_release(netfs_i_cookie(&NFS_I(folio->mapping->host)->netfs)); nfs_inc_fscache_stats(folio->mapping->host, NFSIOS_FSCACHE_PAGES_UNCACHED); } @@ -66,7 +66,7 @@ static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) */ static inline int nfs_fscache_read_page(struct inode *inode, struct page *page) { - if (nfs_i_fscache(inode)) + if (netfs_inode(inode)->cache) return __nfs_fscache_read_page(inode, page); return -ENOBUFS; } @@ -78,7 +78,7 @@ static inline int nfs_fscache_read_page(struct inode *inode, struct page *page) static inline void nfs_fscache_write_page(struct inode *inode, struct page *page) { - if (nfs_i_fscache(inode)) + if (netfs_inode(inode)->cache) __nfs_fscache_write_page(inode, page); } @@ -101,13 +101,10 @@ static inline void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata * static inline void nfs_fscache_invalidate(struct inode *inode, int flags) { struct nfs_fscache_inode_auxdata auxdata; - struct nfs_inode *nfsi = NFS_I(inode); + struct fscache_cookie *cookie = netfs_i_cookie(&NFS_I(inode)->netfs); - if (nfsi->fscache) { - nfs_fscache_update_auxdata(&auxdata, inode); - fscache_invalidate(nfsi->fscache, &auxdata, - i_size_read(inode), flags); - } + nfs_fscache_update_auxdata(&auxdata, inode); + fscache_invalidate(cookie, &auxdata, i_size_read(inode), flags); } /* diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index 45c44211e50e..580847c70fec 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -31,6 +31,10 @@ #include #include +#ifdef CONFIG_NFS_FSCACHE +#include +#endif + #include #include #include @@ -204,14 +208,15 @@ struct nfs_inode { /* how many bytes have been written/read and how many bytes queued up */ __u64 write_io; __u64 read_io; -#ifdef 
CONFIG_NFS_FSCACHE - struct fscache_cookie *fscache; -#endif - struct inode vfs_inode; - #ifdef CONFIG_NFS_V4_2 struct nfs4_xattr_cache *xattr_cache; #endif + union { + struct inode vfs_inode; +#ifdef CONFIG_NFS_FSCACHE + struct netfs_inode netfs; /* netfs context and VFS inode */ +#endif + }; }; struct nfs4_copy_state { @@ -329,15 +334,6 @@ static inline int NFS_STALE(const struct inode *inode) return test_bit(NFS_INO_STALE, &NFS_I(inode)->flags); } -static inline struct fscache_cookie *nfs_i_fscache(struct inode *inode) -{ -#ifdef CONFIG_NFS_FSCACHE - return NFS_I(inode)->fscache; -#else - return NULL; -#endif -} - static inline __u64 NFS_FILEID(const struct inode *inode) { return NFS_I(inode)->fileid; From patchwork Mon Feb 20 13:43:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 13146465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34B62C05027 for ; Mon, 20 Feb 2023 13:44:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231865AbjBTNoP (ORCPT ); Mon, 20 Feb 2023 08:44:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232249AbjBTNoL (ORCPT ); Mon, 20 Feb 2023 08:44:11 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F16184C1E for ; Mon, 20 Feb 2023 05:43:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1676900594; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=+7dbP7vUo3IC4HUGwaYqQjeBgBTP6K7gEUPfLtLdGsI=; b=QlIEzd5R+ldmlLNyec3Q9nQPCzZ3qwgYcD11JZKGyahr1lC5yfq7tDfkt0/Xyfg7s9bmxU ab78IYQ1jzBTni51Kwpmp7iww6pfEMqk5KNaSMYezVv7kr0GgDm26n+gwB6V6sKbXioTfc d3rINV2fT4E5CtTXST1Hi1uIc8+o1Qk= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-1-L-VJ-BMHMhabWvdCReSGXA-1; Mon, 20 Feb 2023 08:43:11 -0500 X-MC-Unique: L-VJ-BMHMhabWvdCReSGXA-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A642218A6465; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) Received: from dwysocha.rdu.csb (unknown [10.22.8.11]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 5AB48404CD84; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) From: Dave Wysochanski To: Anna Schumaker , Trond Myklebust , David Howells , Benjamin Maynard , Daire Byrne Cc: linux-nfs@vger.kernel.org, linux-cachefs@redhat.com Subject: [PATCH v11 3/5] NFS: Convert buffered read paths to use netfs when fscache is enabled Date: Mon, 20 Feb 2023 08:43:06 -0500 Message-Id: <20230220134308.1193219-4-dwysocha@redhat.com> In-Reply-To: <20230220134308.1193219-1-dwysocha@redhat.com> References: <20230220134308.1193219-1-dwysocha@redhat.com> MIME-Version: 1.0 X-Scanned-By: 
MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org

Convert the NFS buffered read code paths to the corresponding netfs APIs, but only when fscache is configured and enabled.

The netfs API defines struct netfs_request_ops, which must be filled in by the network filesystem. For NFS, we only need to define 5 of the functions, the main one being the issue_read() function. The issue_read() function is called by the netfs layer when a read cannot be fulfilled locally and must be sent to the server (either the cache is not active, or it is active but the data is not available). Once the read from the server is complete, netfs requires a call to netfs_subreq_terminated(), which conveys either how many bytes were read successfully or an error. Note that issue_read() is called with a structure, netfs_io_subrequest, which defines the IO requested and contains a start and a length (both in bytes), and assumes the underlying netfs will return either an error on the whole region or the number of bytes successfully read.

The NFS IO path is page based and the main APIs are the pgio APIs defined in pagelist.c. For the pgio APIs, there is no way for the caller to know how many RPCs will be sent and how the pages will be broken up into underlying RPCs, each of which has its own completion and return code. In contrast, netfs is subrequest based: a single subrequest may contain multiple pages, and a single subrequest is initiated with issue_read() and terminated with netfs_subreq_terminated(). Thus, to utilize the netfs APIs, NFS needs some way to accommodate the netfs API requirement of a single response to the whole subrequest, while also minimizing disruptive changes to the NFS pgio layer.

The approach taken with this patch is to allocate a small structure for each nfs_netfs_issue_read() call, store the final error and the number of bytes successfully transferred in the structure, and update these values as each RPC completes. The refcount on the structure is used as a marker for the last RPC completion, is incremented in nfs_netfs_initiate_read(), and is decremented inside nfs_netfs_read_completion(), when an nfs_pgio_header contains a valid pointer to the data. On the final put (which signals that the final outstanding RPC is complete) in nfs_netfs_read_completion(), call netfs_subreq_terminated() with either the final error value (if one or more READs complete with an error) or the number of bytes successfully transferred (if all RPCs complete successfully). Note that when all RPCs complete successfully, the number of bytes transferred is capped to the length of the subrequest. Capping the transferred length to the subrequest length prevents "Subreq overread" warnings from netfs. This is due to the "aligned_len" in nfs_read_add_folio(), and the corner case where NFS requests a full page at the end of the file, even when i_size reflects only a partial page (NFS overread).
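In code form, the final-completion bookkeeping described above reduces to roughly the following (a condensed sketch of the nfs_netfs_put() helper added to fs/nfs/fscache.h in the diff below; the NFS_IOHDR_EOF / NETFS_SREQ_CLEAR_TAIL handling is omitted for brevity):

/*
 * nfs_netfs_initiate_read() takes one reference per RPC sent;
 * nfs_netfs_read_completion() records each RPC's result and drops one.
 * Only the last put reports the outcome back to netfs.
 */
static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs)
{
	ssize_t final_len;

	if (!refcount_dec_and_test(&netfs->refcount))
		return;			/* other RPCs still outstanding */

	/*
	 * Cap the reported length at the subrequest length so that an NFS
	 * overread of the last partial page does not trip netfs's
	 * "Subreq overread" warning.
	 */
	final_len = min_t(s64, netfs->sreq->len,
			  atomic64_read(&netfs->transferred));
	netfs_subreq_terminated(netfs->sreq, netfs->error ?: final_len, false);
	kfree(netfs);
}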
Signed-off-by: Dave Wysochanski --- fs/nfs/fscache.c | 226 +++++++++++++++++++++++---------------- fs/nfs/fscache.h | 122 +++++++++++++++------ fs/nfs/inode.c | 2 + fs/nfs/internal.h | 9 ++ fs/nfs/pagelist.c | 4 + fs/nfs/read.c | 61 +++++------ include/linux/nfs_page.h | 3 + include/linux/nfs_xdr.h | 3 + 8 files changed, 274 insertions(+), 156 deletions(-) diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index 313c1519510c..95c2b3056e2b 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -15,6 +15,9 @@ #include #include #include +#include +#include +#include #include "internal.h" #include "iostat.h" @@ -235,112 +238,153 @@ void nfs_fscache_release_file(struct inode *inode, struct file *filp) fscache_unuse_cookie(cookie, &auxdata, &i_size); } -/* - * Fallback page reading interface. - */ -static int fscache_fallback_read_page(struct inode *inode, struct page *page) +int nfs_netfs_read_folio(struct file *file, struct folio *folio) { - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = netfs_i_cookie(&NFS_I(inode)->netfs); - struct iov_iter iter; - struct bio_vec bvec[1]; - int ret; - - memset(&cres, 0, sizeof(cres)); - bvec[0].bv_page = page; - bvec[0].bv_offset = 0; - bvec[0].bv_len = PAGE_SIZE; - iov_iter_bvec(&iter, ITER_DEST, bvec, ARRAY_SIZE(bvec), PAGE_SIZE); - - ret = fscache_begin_read_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL, - NULL, NULL); - fscache_end_operation(&cres); - return ret; + if (!netfs_inode(folio_inode(folio))->cache) + return -ENOBUFS; + + return netfs_read_folio(file, folio); } -/* - * Fallback page writing interface. - */ -static int fscache_fallback_write_page(struct inode *inode, struct page *page, - bool no_space_allocated_yet) +int nfs_netfs_readahead(struct readahead_control *ractl) { - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = netfs_i_cookie(&NFS_I(inode)->netfs); - struct iov_iter iter; - struct bio_vec bvec[1]; - loff_t start = page_offset(page); - size_t len = PAGE_SIZE; - int ret; - - memset(&cres, 0, sizeof(cres)); - bvec[0].bv_page = page; - bvec[0].bv_offset = 0; - bvec[0].bv_len = PAGE_SIZE; - iov_iter_bvec(&iter, ITER_SOURCE, bvec, ARRAY_SIZE(bvec), PAGE_SIZE); - - ret = fscache_begin_write_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode), - no_space_allocated_yet); - if (ret == 0) - ret = fscache_write(&cres, page_offset(page), &iter, NULL, NULL); - fscache_end_operation(&cres); - return ret; + struct inode *inode = ractl->mapping->host; + + if (!netfs_inode(inode)->cache) + return -ENOBUFS; + + netfs_readahead(ractl); + return 0; } -/* - * Retrieve a page from fscache - */ -int __nfs_fscache_read_page(struct inode *inode, struct page *page) +atomic_t nfs_netfs_debug_id; +static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file) { - int ret; + rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file)); + rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); - trace_nfs_fscache_read_page(inode, page); - if (PageChecked(page)) { - ClearPageChecked(page); - ret = 1; - goto out; - } + return 0; +} - ret = fscache_fallback_read_page(inode, page); - if (ret < 0) { - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); - SetPageChecked(page); - goto out; - } +static void nfs_netfs_free_request(struct netfs_io_request *rreq) +{ + put_nfs_open_context(rreq->netfs_priv); +} - /* Read completed 
synchronously */ - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK); - SetPageUptodate(page); - ret = 0; -out: - trace_nfs_fscache_read_page_exit(inode, page, ret); - return ret; +static inline int nfs_netfs_begin_cache_operation(struct netfs_io_request *rreq) +{ + return fscache_begin_read_operation(&rreq->cache_resources, + netfs_i_cookie(netfs_inode(rreq->inode))); } -/* - * Store a newly fetched page in fscache. We can be certain there's no page - * stored in the cache as yet otherwise we would've read it from there. - */ -void __nfs_fscache_write_page(struct inode *inode, struct page *page) +static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq) { - int ret; + struct nfs_netfs_io_data *netfs; + + netfs = kzalloc(sizeof(*netfs), GFP_KERNEL_ACCOUNT); + if (!netfs) + return NULL; + netfs->sreq = sreq; + refcount_set(&netfs->refcount, 1); + return netfs; +} - trace_nfs_fscache_write_page(inode, page); +static bool nfs_netfs_clamp_length(struct netfs_io_subrequest *sreq) +{ + size_t rsize = NFS_SB(sreq->rreq->inode->i_sb)->rsize; - ret = fscache_fallback_write_page(inode, page, true); + sreq->len = min(sreq->len, rsize); + return true; +} - if (ret != 0) { - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL); - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED); - } else { - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_WRITTEN_OK); +static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) +{ + struct nfs_netfs_io_data *netfs; + struct nfs_pageio_descriptor pgio; + struct inode *inode = sreq->rreq->inode; + struct nfs_open_context *ctx = sreq->rreq->netfs_priv; + struct page *page; + int err; + pgoff_t start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; + pgoff_t last = ((sreq->start + sreq->len - + sreq->transferred - 1) >> PAGE_SHIFT); + XA_STATE(xas, &sreq->rreq->mapping->i_pages, start); + + nfs_pageio_init_read(&pgio, inode, false, + &nfs_async_read_completion_ops); + + netfs = nfs_netfs_alloc(sreq); + if (!netfs) + return netfs_subreq_terminated(sreq, -ENOMEM, false); + + pgio.pg_netfs = netfs; /* used in completion */ + + xas_lock(&xas); + xas_for_each(&xas, page, last) { + /* nfs_read_add_folio() may schedule() due to pNFS layout and other RPCs */ + xas_pause(&xas); + xas_unlock(&xas); + err = nfs_read_add_folio(&pgio, ctx, page_folio(page)); + if (err < 0) { + netfs->error = err; + goto out; + } + xas_lock(&xas); } - trace_nfs_fscache_write_page_exit(inode, page, ret); + xas_unlock(&xas); +out: + nfs_pageio_complete_read(&pgio); + nfs_netfs_put(netfs); } + +void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr) +{ + struct nfs_netfs_io_data *netfs = hdr->netfs; + + if (!netfs) + return; + + nfs_netfs_get(netfs); +} + +int nfs_netfs_folio_unlock(struct folio *folio) +{ + struct inode *inode = folio_file_mapping(folio)->host; + + /* + * If fscache is enabled, netfs will unlock pages. 
+ */ + if (netfs_inode(inode)->cache) + return 0; + + return 1; +} + +void nfs_netfs_read_completion(struct nfs_pgio_header *hdr) +{ + struct nfs_netfs_io_data *netfs = hdr->netfs; + struct netfs_io_subrequest *sreq; + + if (!netfs) + return; + + sreq = netfs->sreq; + if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) + __set_bit(NETFS_SREQ_CLEAR_TAIL, &sreq->flags); + + if (hdr->error) + netfs->error = hdr->error; + else + atomic64_add(hdr->res.count, &netfs->transferred); + + nfs_netfs_put(netfs); + hdr->netfs = NULL; +} + +const struct netfs_request_ops nfs_netfs_ops = { + .init_request = nfs_netfs_init_request, + .free_request = nfs_netfs_free_request, + .begin_cache_operation = nfs_netfs_begin_cache_operation, + .issue_read = nfs_netfs_issue_read, + .clamp_length = nfs_netfs_clamp_length +}; diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 38614ed8f951..e1706e736c64 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -34,6 +34,58 @@ struct nfs_fscache_inode_auxdata { u64 change_attr; }; +struct nfs_netfs_io_data { + /* + * NFS may split a netfs_io_subrequest into multiple RPCs, each + * with their own read completion. In netfs, we can only call + * netfs_subreq_terminated() once for each subrequest. Use the + * refcount here to double as a marker of the last RPC completion, + * and only call netfs via netfs_subreq_terminated() once. + */ + refcount_t refcount; + struct netfs_io_subrequest *sreq; + + /* + * Final disposition of the netfs_io_subrequest, sent in + * netfs_subreq_terminated() + */ + atomic64_t transferred; + int error; +}; + +static inline void nfs_netfs_get(struct nfs_netfs_io_data *netfs) +{ + refcount_inc(&netfs->refcount); +} + +static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) +{ + ssize_t final_len; + + /* Only the last RPC completion should call netfs_subreq_terminated() */ + if (!refcount_dec_and_test(&netfs->refcount)) + return; + + /* + * The NFS pageio interface may read a complete page, even when netfs + * only asked for a partial page. Specifically, this may be seen when + * one thread is truncating a file while another one is reading the last + * page of the file. + * Correct the final length here to be no larger than the netfs subrequest + * length, and thus avoid netfs's "Subreq overread" warning message. 
+ */ + final_len = min_t(s64, netfs->sreq->len, atomic64_read(&netfs->transferred)); + netfs_subreq_terminated(netfs->sreq, netfs->error ?: final_len, false); + kfree(netfs); +} +static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) +{ + netfs_inode_init(&nfsi->netfs, &nfs_netfs_ops); +} +extern void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr); +extern void nfs_netfs_read_completion(struct nfs_pgio_header *hdr); +extern int nfs_netfs_folio_unlock(struct folio *folio); + /* * fscache.c */ @@ -44,9 +96,8 @@ extern void nfs_fscache_init_inode(struct inode *); extern void nfs_fscache_clear_inode(struct inode *); extern void nfs_fscache_open_file(struct inode *, struct file *); extern void nfs_fscache_release_file(struct inode *, struct file *); - -extern int __nfs_fscache_read_page(struct inode *, struct page *); -extern void __nfs_fscache_write_page(struct inode *, struct page *); +extern int nfs_netfs_readahead(struct readahead_control *ractl); +extern int nfs_netfs_read_folio(struct file *file, struct folio *folio); static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) { @@ -54,34 +105,11 @@ static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) if (current_is_kswapd() || !(gfp & __GFP_FS)) return false; folio_wait_fscache(folio); - fscache_note_page_release(netfs_i_cookie(&NFS_I(folio->mapping->host)->netfs)); - nfs_inc_fscache_stats(folio->mapping->host, - NFSIOS_FSCACHE_PAGES_UNCACHED); } + fscache_note_page_release(netfs_i_cookie(netfs_inode(folio->mapping->host))); return true; } -/* - * Retrieve a page from an inode data storage object. - */ -static inline int nfs_fscache_read_page(struct inode *inode, struct page *page) -{ - if (netfs_inode(inode)->cache) - return __nfs_fscache_read_page(inode, page); - return -ENOBUFS; -} - -/* - * Store a page newly fetched from the server in an inode data storage object - * in the cache. 
- */ -static inline void nfs_fscache_write_page(struct inode *inode, - struct page *page) -{ - if (netfs_inode(inode)->cache) - __nfs_fscache_write_page(inode, page); -} - static inline void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata, struct inode *inode) { @@ -117,7 +145,28 @@ static inline const char *nfs_server_fscache_state(struct nfs_server *server) return "no "; } +static inline void nfs_netfs_set_pgio_header(struct nfs_pgio_header *hdr, + struct nfs_pageio_descriptor *desc) +{ + hdr->netfs = desc->pg_netfs; +} +static inline void nfs_netfs_set_pageio_descriptor(struct nfs_pageio_descriptor *desc, + struct nfs_pgio_header *hdr) +{ + desc->pg_netfs = hdr->netfs; +} +static inline void nfs_netfs_reset_pageio_descriptor(struct nfs_pageio_descriptor *desc) +{ + desc->pg_netfs = NULL; +} #else /* CONFIG_NFS_FSCACHE */ +static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) {} +static inline void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr) {} +static inline void nfs_netfs_read_completion(struct nfs_pgio_header *hdr) {} +static inline int nfs_netfs_folio_unlock(struct folio *folio) +{ + return 1; +} static inline void nfs_fscache_release_super_cookie(struct super_block *sb) {} static inline void nfs_fscache_init_inode(struct inode *inode) {} @@ -125,22 +174,29 @@ static inline void nfs_fscache_clear_inode(struct inode *inode) {} static inline void nfs_fscache_open_file(struct inode *inode, struct file *filp) {} static inline void nfs_fscache_release_file(struct inode *inode, struct file *file) {} - -static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) +static inline int nfs_netfs_readahead(struct readahead_control *ractl) { - return true; /* may release folio */ + return -ENOBUFS; } -static inline int nfs_fscache_read_page(struct inode *inode, struct page *page) +static inline int nfs_netfs_read_folio(struct file *file, struct folio *folio) { return -ENOBUFS; } -static inline void nfs_fscache_write_page(struct inode *inode, struct page *page) {} + +static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) +{ + return true; /* may release folio */ +} static inline void nfs_fscache_invalidate(struct inode *inode, int flags) {} static inline const char *nfs_server_fscache_state(struct nfs_server *server) { return "no "; } - +static inline void nfs_netfs_set_pgio_header(struct nfs_pgio_header *hdr, + struct nfs_pageio_descriptor *desc) {} +static inline void nfs_netfs_set_pageio_descriptor(struct nfs_pageio_descriptor *desc, + struct nfs_pgio_header *hdr) {} +static inline void nfs_netfs_reset_pageio_descriptor(struct nfs_pageio_descriptor *desc) {} #endif /* CONFIG_NFS_FSCACHE */ #endif /* _NFS_FSCACHE_H */ diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index e98ee7599eeb..68503a7e5b8d 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -2246,6 +2246,8 @@ struct inode *nfs_alloc_inode(struct super_block *sb) #ifdef CONFIG_NFS_V4_2 nfsi->xattr_cache = NULL; #endif + nfs_netfs_inode_init(nfsi); + return &nfsi->vfs_inode; } EXPORT_SYMBOL_GPL(nfs_alloc_inode); diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 239ae5084774..53e7257feb14 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -452,6 +452,10 @@ extern void nfs_sb_deactive(struct super_block *sb); extern int nfs_client_for_each_server(struct nfs_client *clp, int (*fn)(struct nfs_server *, void *), void *data); +#ifdef CONFIG_NFS_FSCACHE +extern const struct netfs_request_ops nfs_netfs_ops; +#endif + /* io.c */ extern void 
nfs_start_io_read(struct inode *inode); extern void nfs_end_io_read(struct inode *inode); @@ -481,9 +485,14 @@ extern int nfs4_get_rootfh(struct nfs_server *server, struct nfs_fh *mntfh, bool struct nfs_pgio_completion_ops; /* read.c */ +extern const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, struct inode *inode, bool force_mds, const struct nfs_pgio_completion_ops *compl_ops); +extern int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, + struct nfs_open_context *ctx, + struct folio *folio); +extern void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio); extern void nfs_read_prepare(struct rpc_task *task, void *calldata); extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio); diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index 0b4c07c93a52..43582dcba66c 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -25,6 +25,7 @@ #include "internal.h" #include "pnfs.h" #include "nfstrace.h" +#include "fscache.h" #define NFSDBG_FACILITY NFSDBG_PAGECACHE @@ -104,6 +105,7 @@ void nfs_pgheader_init(struct nfs_pageio_descriptor *desc, hdr->good_bytes = mirror->pg_count; hdr->io_completion = desc->pg_io_completion; hdr->dreq = desc->pg_dreq; + nfs_netfs_set_pgio_header(hdr, desc); hdr->release = release; hdr->completion_ops = desc->pg_completion_ops; if (hdr->completion_ops->init_hdr) @@ -940,6 +942,7 @@ void nfs_pageio_init(struct nfs_pageio_descriptor *desc, desc->pg_lseg = NULL; desc->pg_io_completion = NULL; desc->pg_dreq = NULL; + nfs_netfs_reset_pageio_descriptor(desc); desc->pg_bsize = bsize; desc->pg_mirror_count = 1; @@ -1476,6 +1479,7 @@ int nfs_pageio_resend(struct nfs_pageio_descriptor *desc, desc->pg_io_completion = hdr->io_completion; desc->pg_dreq = hdr->dreq; + nfs_netfs_set_pageio_descriptor(desc, hdr); list_splice_init(&hdr->pages, &pages); while (!list_empty(&pages)) { struct nfs_page *req = nfs_list_entry(pages.next); diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 4cb3991c4735..e3a630f4764b 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -30,7 +30,7 @@ #define NFSDBG_FACILITY NFSDBG_PAGECACHE -static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; +const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; static const struct nfs_rw_ops nfs_rw_read_ops; static struct kmem_cache *nfs_rdata_cachep; @@ -73,7 +73,7 @@ void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, } EXPORT_SYMBOL_GPL(nfs_pageio_init_read); -static void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio) +void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio) { struct nfs_pgio_mirror *pgm; unsigned long npages; @@ -109,20 +109,14 @@ EXPORT_SYMBOL_GPL(nfs_pageio_reset_read_mds); static void nfs_readpage_release(struct nfs_page *req, int error) { - struct inode *inode = d_inode(nfs_req_openctx(req)->dentry); struct folio *folio = nfs_page_to_folio(req); - dprintk("NFS: read done (%s/%llu %d@%lld)\n", inode->i_sb->s_id, - (unsigned long long)NFS_FILEID(inode), req->wb_bytes, - (long long)req_offset(req)); - if (nfs_error_is_fatal_on_server(error) && error != -ETIMEDOUT) folio_set_error(folio); - if (nfs_page_group_sync_on_bit(req, PG_UNLOCKPAGE)) { - if (folio_test_uptodate(folio)) - nfs_fscache_write_page(inode, &folio->page); - folio_unlock(folio); - } + if (nfs_page_group_sync_on_bit(req, PG_UNLOCKPAGE)) + if (nfs_netfs_folio_unlock(folio)) + folio_unlock(folio); + nfs_release_request(req); } @@ -176,6 +170,8 @@ static void 
nfs_read_completion(struct nfs_pgio_header *hdr) nfs_list_remove_request(req); nfs_readpage_release(req, error); } + nfs_netfs_read_completion(hdr); + out: hdr->release(hdr); } @@ -186,6 +182,7 @@ static void nfs_initiate_read(struct nfs_pgio_header *hdr, struct rpc_task_setup *task_setup_data, int how) { rpc_ops->read_setup(hdr, msg); + nfs_netfs_initiate_read(hdr); trace_nfs_initiate_read(hdr); } @@ -201,7 +198,7 @@ nfs_async_read_error(struct list_head *head, int error) } } -static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = { +const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = { .error_cleanup = nfs_async_read_error, .completion = nfs_read_completion, }; @@ -276,9 +273,9 @@ static void nfs_readpage_result(struct rpc_task *task, nfs_readpage_retry(task, hdr); } -static int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, - struct nfs_open_context *ctx, - struct folio *folio) +int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, + struct nfs_open_context *ctx, + struct folio *folio) { struct inode *inode = folio_file_mapping(folio)->host; struct nfs_server *server = NFS_SERVER(inode); @@ -294,15 +291,11 @@ static int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, aligned_len = min_t(unsigned int, ALIGN(len, rsize), fsize); - if (!IS_SYNC(inode)) { - error = nfs_fscache_read_page(inode, &folio->page); - if (error == 0) - goto out_unlock; - } - new = nfs_page_create_from_folio(ctx, folio, 0, aligned_len); - if (IS_ERR(new)) - goto out_error; + if (IS_ERR(new)) { + error = PTR_ERR(new); + goto out; + } if (len < fsize) folio_zero_segment(folio, len, fsize); @@ -313,10 +306,6 @@ static int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio, goto out; } return 0; -out_error: - error = PTR_ERR(new); -out_unlock: - folio_unlock(folio); out: return error; } @@ -354,6 +343,10 @@ int nfs_read_folio(struct file *file, struct folio *folio) if (NFS_STALE(inode)) goto out_unlock; + ret = nfs_netfs_read_folio(file, folio); + if (!ret) + goto out; + ctx = get_nfs_open_context(nfs_file_open_context(file)); xchg(&ctx->error, 0); @@ -362,7 +355,7 @@ int nfs_read_folio(struct file *file, struct folio *folio) ret = nfs_read_add_folio(&pgio, ctx, folio); if (ret) - goto out; + goto out_put; nfs_pageio_complete_read(&pgio); ret = pgio.pg_error < 0 ? 
pgio.pg_error : 0; @@ -371,14 +364,14 @@ int nfs_read_folio(struct file *file, struct folio *folio) if (!folio_test_uptodate(folio) && !ret) ret = xchg(&ctx->error, 0); } -out: +out_put: put_nfs_open_context(ctx); +out: trace_nfs_aop_readpage_done(inode, folio, ret); return ret; out_unlock: folio_unlock(folio); - trace_nfs_aop_readpage_done(inode, folio, ret); - return ret; + goto out; } void nfs_readahead(struct readahead_control *ractl) @@ -398,6 +391,10 @@ void nfs_readahead(struct readahead_control *ractl) if (NFS_STALE(inode)) goto out; + ret = nfs_netfs_readahead(ractl); + if (!ret) + goto out; + if (file == NULL) { ret = -EBADF; ctx = nfs_find_open_context(inode, NULL, FMODE_READ); diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h index a2f1ca657623..aa9f4c6ebe26 100644 --- a/include/linux/nfs_page.h +++ b/include/linux/nfs_page.h @@ -105,6 +105,9 @@ struct nfs_pageio_descriptor { struct pnfs_layout_segment *pg_lseg; struct nfs_io_completion *pg_io_completion; struct nfs_direct_req *pg_dreq; +#ifdef CONFIG_NFS_FSCACHE + void *pg_netfs; +#endif unsigned int pg_bsize; /* default bsize for mirrors */ u32 pg_mirror_count; diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h index e86cf6642d21..e196ef595908 100644 --- a/include/linux/nfs_xdr.h +++ b/include/linux/nfs_xdr.h @@ -1619,6 +1619,9 @@ struct nfs_pgio_header { const struct nfs_rw_ops *rw_ops; struct nfs_io_completion *io_completion; struct nfs_direct_req *dreq; +#ifdef CONFIG_NFS_FSCACHE + void *netfs; +#endif int pnfs_error; int error; /* merge with pnfs_error */ From patchwork Mon Feb 20 13:43:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 13146461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FDAEC636CC for ; Mon, 20 Feb 2023 13:44:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231940AbjBTNoJ (ORCPT ); Mon, 20 Feb 2023 08:44:09 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53866 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231705AbjBTNoG (ORCPT ); Mon, 20 Feb 2023 08:44:06 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 545701C7D4 for ; Mon, 20 Feb 2023 05:43:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1676900594; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=L2AJtsFPZBKYokV218FEgdDycX+OLdAKiTDvaBJ75Ag=; b=DljSATXtcIHLS3F9jLk4V68XWz818iKY3QCXtfLMGfYBpH/kpQj5QqVsGPszROVS9kN1EW PPXG8SIvxF55ZVJR93fmBqu8/hklhRFYZTMIBsJxMluAunrKN0NmJVfyEmcp7xXjT0uXLd 3b6bfsnmvbnxyekBkq15vDvYvM4xfMc= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-581-HZx7-_IhNS-Tf16WatBtCQ-1; Mon, 20 Feb 2023 08:43:11 -0500 X-MC-Unique: HZx7-_IhNS-Tf16WatBtCQ-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) 
(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id F05DE100F901; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) Received: from dwysocha.rdu.csb (unknown [10.22.8.11]) by smtp.corp.redhat.com (Postfix) with ESMTPS id AFDCE404CD84; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) From: Dave Wysochanski To: Anna Schumaker , Trond Myklebust , David Howells , Benjamin Maynard , Daire Byrne Cc: linux-nfs@vger.kernel.org, linux-cachefs@redhat.com Subject: [PATCH v11 4/5] NFS: Remove all NFSIOS_FSCACHE counters due to conversion to netfs API Date: Mon, 20 Feb 2023 08:43:07 -0500 Message-Id: <20230220134308.1193219-5-dwysocha@redhat.com> In-Reply-To: <20230220134308.1193219-1-dwysocha@redhat.com> References: <20230220134308.1193219-1-dwysocha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The old NFSIOS_FSCACHE counters are no longer accurate or useful with the conversion to the new netfs API. The new API does not have a page based interface, and so the counters in nfs_stat_fscachecounters are no longer obtainable. The new netfs the API has extensive statistics inside /proc/fs/fscache/stats so we no longer need NFS specific fscache stats. Note this also removes the 'fsc:' line from /proc/self/mountstats so it will be a user-visible change. Signed-off-by: Dave Wysochanski Reviewed-by: Jeff Layton --- fs/nfs/iostat.h | 17 ----------------- fs/nfs/super.c | 11 ----------- include/linux/nfs_iostat.h | 12 ------------ 3 files changed, 40 deletions(-) diff --git a/fs/nfs/iostat.h b/fs/nfs/iostat.h index 2ddaab1ac653..5aa776b5a3e7 100644 --- a/fs/nfs/iostat.h +++ b/fs/nfs/iostat.h @@ -17,9 +17,6 @@ struct nfs_iostats { unsigned long long bytes[__NFSIOS_BYTESMAX]; -#ifdef CONFIG_NFS_FSCACHE - unsigned long long fscache[__NFSIOS_FSCACHEMAX]; -#endif unsigned long events[__NFSIOS_COUNTSMAX]; } ____cacheline_aligned; @@ -49,20 +46,6 @@ static inline void nfs_add_stats(const struct inode *inode, nfs_add_server_stats(NFS_SERVER(inode), stat, addend); } -#ifdef CONFIG_NFS_FSCACHE -static inline void nfs_add_fscache_stats(struct inode *inode, - enum nfs_stat_fscachecounters stat, - long addend) -{ - this_cpu_add(NFS_SERVER(inode)->io_stats->fscache[stat], addend); -} -static inline void nfs_inc_fscache_stats(struct inode *inode, - enum nfs_stat_fscachecounters stat) -{ - this_cpu_inc(NFS_SERVER(inode)->io_stats->fscache[stat]); -} -#endif - static inline struct nfs_iostats __percpu *nfs_alloc_iostats(void) { return alloc_percpu(struct nfs_iostats); diff --git a/fs/nfs/super.c b/fs/nfs/super.c index 05ae23657527..90796db8c205 100644 --- a/fs/nfs/super.c +++ b/fs/nfs/super.c @@ -692,10 +692,6 @@ int nfs_show_stats(struct seq_file *m, struct dentry *root) totals.events[i] += stats->events[i]; for (i = 0; i < __NFSIOS_BYTESMAX; i++) totals.bytes[i] += stats->bytes[i]; -#ifdef CONFIG_NFS_FSCACHE - for (i = 0; i < __NFSIOS_FSCACHEMAX; i++) - totals.fscache[i] += stats->fscache[i]; -#endif preempt_enable(); } @@ -706,13 +702,6 @@ int nfs_show_stats(struct seq_file *m, struct dentry *root) seq_puts(m, "\n\tbytes:\t"); for (i = 0; i < __NFSIOS_BYTESMAX; i++) seq_printf(m, "%Lu ", totals.bytes[i]); -#ifdef CONFIG_NFS_FSCACHE - if (nfss->options & NFS_OPTION_FSCACHE) { - seq_puts(m, "\n\tfsc:\t"); - for (i = 0; i < __NFSIOS_FSCACHEMAX; i++) - seq_printf(m, "%Lu ", totals.fscache[i]); - } -#endif seq_putc(m, '\n'); rpc_clnt_show_stats(m, nfss->client); diff 
--git a/include/linux/nfs_iostat.h b/include/linux/nfs_iostat.h index 027874c36c88..8d946089d151 100644 --- a/include/linux/nfs_iostat.h +++ b/include/linux/nfs_iostat.h @@ -119,16 +119,4 @@ enum nfs_stat_eventcounters { __NFSIOS_COUNTSMAX, }; -/* - * NFS local caching servicing counters - */ -enum nfs_stat_fscachecounters { - NFSIOS_FSCACHE_PAGES_READ_OK, - NFSIOS_FSCACHE_PAGES_READ_FAIL, - NFSIOS_FSCACHE_PAGES_WRITTEN_OK, - NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL, - NFSIOS_FSCACHE_PAGES_UNCACHED, - __NFSIOS_FSCACHEMAX, -}; - #endif /* _LINUX_NFS_IOSTAT */ From patchwork Mon Feb 20 13:43:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 13146463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51354C64EC7 for ; Mon, 20 Feb 2023 13:44:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232243AbjBTNoL (ORCPT ); Mon, 20 Feb 2023 08:44:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53960 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232235AbjBTNoH (ORCPT ); Mon, 20 Feb 2023 08:44:07 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9B5BC1ADE8 for ; Mon, 20 Feb 2023 05:43:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1676900592; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RBugVf10c9MjPQYLpdSEXbCGaHl2WL0p/2X6fdbtjXM=; b=IvY32adMeetdTEkaYshyHr0YntKjCCj4lKT1mPug/Ox0b5HcCgY3F2yqAuQYwGvYUzeHQa vOI7jasiY6s8Kzii+e2O8mCzrsof2shUT408wZmq96wdEiCCHS+8vQzQ5RYGFIvEEAwRhG wBZX7I4NGlQsp+i+GXQN5FjMGeJHEkA= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-606-QywOI7FsOTu-yje4y7iT0Q-1; Mon, 20 Feb 2023 08:43:11 -0500 X-MC-Unique: QywOI7FsOTu-yje4y7iT0Q-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4B1463815EE5; Mon, 20 Feb 2023 13:43:11 +0000 (UTC) Received: from dwysocha.rdu.csb (unknown [10.22.8.11]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 06067404CD84; Mon, 20 Feb 2023 13:43:10 +0000 (UTC) From: Dave Wysochanski To: Anna Schumaker , Trond Myklebust , David Howells , Benjamin Maynard , Daire Byrne Cc: linux-nfs@vger.kernel.org, linux-cachefs@redhat.com Subject: [PATCH v11 5/5] NFS: Remove fscache specific trace points and NFS_INO_FSCACHE bit Date: Mon, 20 Feb 2023 08:43:08 -0500 Message-Id: <20230220134308.1193219-6-dwysocha@redhat.com> In-Reply-To: <20230220134308.1193219-1-dwysocha@redhat.com> References: <20230220134308.1193219-1-dwysocha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The NFS specific trace points are no longer needed as tracing is well 
covered by netfs and fscache. Signed-off-by: Dave Wysochanski Reviewed-by: Jeff Layton --- fs/nfs/nfstrace.h | 91 ------------------------------------------ include/linux/nfs_fs.h | 1 - 2 files changed, 92 deletions(-) diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h index a778713343df..4e90ca531176 100644 --- a/fs/nfs/nfstrace.h +++ b/fs/nfs/nfstrace.h @@ -39,7 +39,6 @@ { BIT(NFS_INO_STALE), "STALE" }, \ { BIT(NFS_INO_ACL_LRU_SET), "ACL_LRU_SET" }, \ { BIT(NFS_INO_INVALIDATING), "INVALIDATING" }, \ - { BIT(NFS_INO_FSCACHE), "FSCACHE" }, \ { BIT(NFS_INO_LAYOUTCOMMIT), "NEED_LAYOUTCOMMIT" }, \ { BIT(NFS_INO_LAYOUTCOMMITTING), "LAYOUTCOMMIT" }, \ { BIT(NFS_INO_LAYOUTSTATS), "LAYOUTSTATS" }, \ @@ -1243,96 +1242,6 @@ TRACE_EVENT(nfs_readpage_short, ) ); -DECLARE_EVENT_CLASS(nfs_fscache_page_event, - TP_PROTO( - const struct inode *inode, - struct page *page - ), - - TP_ARGS(inode, page), - - TP_STRUCT__entry( - __field(dev_t, dev) - __field(u32, fhandle) - __field(u64, fileid) - __field(loff_t, offset) - ), - - TP_fast_assign( - const struct nfs_inode *nfsi = NFS_I(inode); - const struct nfs_fh *fh = &nfsi->fh; - - __entry->offset = page_index(page) << PAGE_SHIFT; - __entry->dev = inode->i_sb->s_dev; - __entry->fileid = nfsi->fileid; - __entry->fhandle = nfs_fhandle_hash(fh); - ), - - TP_printk( - "fileid=%02x:%02x:%llu fhandle=0x%08x " - "offset=%lld", - MAJOR(__entry->dev), MINOR(__entry->dev), - (unsigned long long)__entry->fileid, - __entry->fhandle, - (long long)__entry->offset - ) -); -DECLARE_EVENT_CLASS(nfs_fscache_page_event_done, - TP_PROTO( - const struct inode *inode, - struct page *page, - int error - ), - - TP_ARGS(inode, page, error), - - TP_STRUCT__entry( - __field(int, error) - __field(dev_t, dev) - __field(u32, fhandle) - __field(u64, fileid) - __field(loff_t, offset) - ), - - TP_fast_assign( - const struct nfs_inode *nfsi = NFS_I(inode); - const struct nfs_fh *fh = &nfsi->fh; - - __entry->offset = page_index(page) << PAGE_SHIFT; - __entry->dev = inode->i_sb->s_dev; - __entry->fileid = nfsi->fileid; - __entry->fhandle = nfs_fhandle_hash(fh); - __entry->error = error; - ), - - TP_printk( - "fileid=%02x:%02x:%llu fhandle=0x%08x " - "offset=%lld error=%d", - MAJOR(__entry->dev), MINOR(__entry->dev), - (unsigned long long)__entry->fileid, - __entry->fhandle, - (long long)__entry->offset, __entry->error - ) -); -#define DEFINE_NFS_FSCACHE_PAGE_EVENT(name) \ - DEFINE_EVENT(nfs_fscache_page_event, name, \ - TP_PROTO( \ - const struct inode *inode, \ - struct page *page \ - ), \ - TP_ARGS(inode, page)) -#define DEFINE_NFS_FSCACHE_PAGE_EVENT_DONE(name) \ - DEFINE_EVENT(nfs_fscache_page_event_done, name, \ - TP_PROTO( \ - const struct inode *inode, \ - struct page *page, \ - int error \ - ), \ - TP_ARGS(inode, page, error)) -DEFINE_NFS_FSCACHE_PAGE_EVENT(nfs_fscache_read_page); -DEFINE_NFS_FSCACHE_PAGE_EVENT_DONE(nfs_fscache_read_page_exit); -DEFINE_NFS_FSCACHE_PAGE_EVENT(nfs_fscache_write_page); -DEFINE_NFS_FSCACHE_PAGE_EVENT_DONE(nfs_fscache_write_page_exit); TRACE_EVENT(nfs_pgio_error, TP_PROTO( diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index 580847c70fec..137d296e6002 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -281,7 +281,6 @@ struct nfs4_copy_state { #define NFS_INO_ACL_LRU_SET (2) /* Inode is on the LRU list */ #define NFS_INO_INVALIDATING (3) /* inode is being invalidated */ #define NFS_INO_PRESERVE_UNLINKED (4) /* preserve file if removed while open */ -#define NFS_INO_FSCACHE (5) /* inode can be cached by FS-Cache */ #define 
NFS_INO_LAYOUTCOMMIT (9) /* layoutcommit required */ #define NFS_INO_LAYOUTCOMMITTING (10) /* layoutcommit inflight */ #define NFS_INO_LAYOUTSTATS (11) /* layoutstats inflight */