From patchwork Sat Nov 21 13:29:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09F90C388F9 for ; Sat, 21 Nov 2020 13:29:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B1E7822206 for ; Sat, 21 Nov 2020 13:29:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MFK14bFb" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727677AbgKUN3T (ORCPT ); Sat, 21 Nov 2020 08:29:19 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:20269 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3S (ORCPT ); Sat, 21 Nov 2020 08:29:18 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965356; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=buXyeDqhJk3lr2ZvPH0W6IIv0GhiX9VQVRcp4DWDrA4=; b=MFK14bFbmobiAnyCdusxID5liboR10AvjC4chopepbXBifyGfCoJZ1TPZ663YrK0uxk5Xq KPQSddH5MfwZNAmxM1fMB/W7jJEbcxi0F2zihGICO5NK8tOWP0wa/MIJB5UZlwUKiF7rXk xi9QFk4+dLbFfc7TvDNR/jzYOV2LK20= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-280-p68vy3MJMC2GqGY3RetWyQ-1; Sat, 21 Nov 2020 08:29:14 -0500 X-MC-Unique: p68vy3MJMC2GqGY3RetWyQ-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id BA0911DDF9; Sat, 21 Nov 2020 13:29:13 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 21FD96085D; Sat, 21 Nov 2020 13:29:13 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 01/13] NFS: Clean up nfs_readpage() and nfs_readpages() Date: Sat, 21 Nov 2020 08:29:11 -0500 Message-Id: <1605965351-24503-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org In prep for the new fscache netfs API, refactor nfs_readpage() and nfs_readpages() for future patches. No functional change. Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 45 +++++++++++++++++++++++---------------------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index eb854f1f86e2..a05fb3904ddf 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -310,11 +310,11 @@ static void nfs_readpage_result(struct rpc_task *task, * - The error flag is set for this page. This happens only when a * previous async read operation failed. 
*/ -int nfs_readpage(struct file *file, struct page *page) +int nfs_readpage(struct file *filp, struct page *page) { struct nfs_open_context *ctx; struct inode *inode = page_file_mapping(page)->host; - int error; + int ret; dprintk("NFS: nfs_readpage (%p %ld@%lu)\n", page, PAGE_SIZE, page_index(page)); @@ -328,43 +328,43 @@ int nfs_readpage(struct file *file, struct page *page) * be any new pending writes generated at this point * for this page (other pages can be written to). */ - error = nfs_wb_page(inode, page); - if (error) + ret = nfs_wb_page(inode, page); + if (ret) goto out_unlock; if (PageUptodate(page)) goto out_unlock; - error = -ESTALE; + ret = -ESTALE; if (NFS_STALE(inode)) goto out_unlock; - if (file == NULL) { - error = -EBADF; + if (filp == NULL) { + ret = -EBADF; ctx = nfs_find_open_context(inode, NULL, FMODE_READ); if (ctx == NULL) goto out_unlock; } else - ctx = get_nfs_open_context(nfs_file_open_context(file)); + ctx = get_nfs_open_context(nfs_file_open_context(filp)); if (!IS_SYNC(inode)) { - error = nfs_readpage_from_fscache(ctx, inode, page); - if (error == 0) + ret = nfs_readpage_from_fscache(ctx, inode, page); + if (ret == 0) goto out; } xchg(&ctx->error, 0); - error = nfs_readpage_async(ctx, inode, page); - if (!error) { - error = wait_on_page_locked_killable(page); - if (!PageUptodate(page) && !error) - error = xchg(&ctx->error, 0); + ret = nfs_readpage_async(ctx, inode, page); + if (!ret) { + ret = wait_on_page_locked_killable(page); + if (!PageUptodate(page) && !ret) + ret = xchg(&ctx->error, 0); } out: put_nfs_open_context(ctx); - return error; + return ret; out_unlock: unlock_page(page); - return error; + return ret; } struct nfs_readdesc { @@ -409,12 +409,10 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, { struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; - struct nfs_readdesc desc = { - .pgio = &pgio, - }; + struct nfs_readdesc desc; struct inode *inode = mapping->host; unsigned long npages; - int ret = -ESTALE; + int ret; dprintk("NFS: nfs_readpages (%s/%Lu %d)\n", inode->i_sb->s_id, @@ -422,13 +420,15 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, nr_pages); nfs_inc_stats(inode, NFSIOS_VFSREADPAGES); + ret = -ESTALE; if (NFS_STALE(inode)) goto out; if (filp == NULL) { + ret = -EBADF; desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); if (desc.ctx == NULL) - return -EBADF; + goto out; } else desc.ctx = get_nfs_open_context(nfs_file_open_context(filp)); @@ -440,6 +440,7 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, if (ret == 0) goto read_complete; /* all pages were read */ + desc.pgio = &pgio; nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); From patchwork Sat Nov 21 13:29:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BF99C56202 for ; Sat, 21 Nov 2020 13:29:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org 
[23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EB07E2224A for ; Sat, 21 Nov 2020 13:29:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="TZzJoqIJ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727418AbgKUN3V (ORCPT ); Sat, 21 Nov 2020 08:29:21 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:57297 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727702AbgKUN3T (ORCPT ); Sat, 21 Nov 2020 08:29:19 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965358; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=TmUm+FqsWNSGezib7++Wn5qxVEto0VOC87DhDKJ5c38=; b=TZzJoqIJ5/5W2MU35dPepxQAidjzFejcjwHba/Lfow3aK0qv8UHqSLswT5UYAWA0+yTWUH cax4CjrBk+JCkj2tMsOjDvwYiN7B/J5JuRgPu9QS9twUIfltRtlV8D0xYx1Zud47qAvGyh EX9BiRysfRifiStXj1Z0aNra3jFpSeU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-437-7kPhSD1tP7mATZB1MMSQgA-1; Sat, 21 Nov 2020 08:29:17 -0500 X-MC-Unique: 7kPhSD1tP7mATZB1MMSQgA-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E8FF71DDE7; Sat, 21 Nov 2020 13:29:15 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 700AA1002C10; Sat, 21 Nov 2020 13:29:15 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 02/13] NFS: In nfs_readpage() only increment NFSIOS_READPAGES when read succeeds Date: Sat, 21 Nov 2020 08:29:14 -0500 Message-Id: <1605965354-24538-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org There is a small inconsistency with nfs_readpage() vs nfs_readpages() with regards to NFSIOS_READPAGES. In readpage we unconditionally increment NFSIOS_READPAGES at the top, which means even if the read fails. In readpages, we increment NFSIOS_READPAGES at the bottom based on how many pages were successfully read. Change readpage to be consistent with readpages and so NFSIOS_READPAGES only reflects successful, non-fscache reads. Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index a05fb3904ddf..1153c4e0a155 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -319,7 +319,6 @@ int nfs_readpage(struct file *filp, struct page *page) dprintk("NFS: nfs_readpage (%p %ld@%lu)\n", page, PAGE_SIZE, page_index(page)); nfs_inc_stats(inode, NFSIOS_VFSREADPAGE); - nfs_add_stats(inode, NFSIOS_READPAGES, 1); /* * Try to flush any pending writes to the file.. 
@@ -359,6 +358,7 @@ int nfs_readpage(struct file *filp, struct page *page) if (!PageUptodate(page) && !ret) ret = xchg(&ctx->error, 0); } + nfs_add_stats(inode, NFSIOS_READPAGES, 1); out: put_nfs_open_context(ctx); return ret; From patchwork Sat Nov 21 13:29:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 841E3C388F9 for ; Sat, 21 Nov 2020 13:29:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 41E0822206 for ; Sat, 21 Nov 2020 13:29:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MRZ+YOC5" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727702AbgKUN3W (ORCPT ); Sat, 21 Nov 2020 08:29:22 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:29708 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3W (ORCPT ); Sat, 21 Nov 2020 08:29:22 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965360; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=oIKkKvCK0reUE2MSQqAJw0OdbfqiTIE85S+RpDoJ/sQ=; b=MRZ+YOC5idmllj9iFEE4/rZF0pm6gGAO6pzaD8+5DST6Hm2ht7u5iAnfxhu7JOlUTzDnou HCmzUmwa/Xbdlq3jZVq6PvOn4V9bSZv5qLjG19xAPqyrnvE4L+j8be13IW4ywCR2ACSQjW MfHjbYx7A+2lYhMheFCLQtedmuUaKDs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-472-jS1AibsONQGeX60M-NJLCw-1; Sat, 21 Nov 2020 08:29:19 -0500 X-MC-Unique: jS1AibsONQGeX60M-NJLCw-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1CC5180EDAA; Sat, 21 Nov 2020 13:29:18 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 957B05C1D5; Sat, 21 Nov 2020 13:29:17 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 03/13] NFS: Refactor nfs_readpage() and nfs_readpage_async() to use nfs_readdesc Date: Sat, 21 Nov 2020 08:29:16 -0500 Message-Id: <1605965356-24573-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Both nfs_readpage() and nfs_readpages() use similar code. This patch should be no functional change, and refactors nfs_readpage_async() to use nfs_readdesc to enable future merging of nfs_readpage_async() and nfs_readpage_async_filler(). 
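For reviewers skimming the diff, the shape of the refactor is roughly the following. This is a condensed sketch assembled from the hunks below (writeback flushing, the fscache lookup, the ESTALE/EBADF checks and the zero-length/IS_ERR handling are elided), not a drop-in replacement for the real functions:

struct nfs_readdesc {
	struct nfs_pageio_descriptor pgio;	/* embedded descriptor, was a pointer */
	struct nfs_open_context *ctx;
};

int nfs_readpage(struct file *filp, struct page *page)
{
	struct nfs_readdesc desc;
	struct inode *inode = page_file_mapping(page)->host;
	int ret;

	/* writeback flush, ESTALE, NULL-filp and fscache checks elided */
	desc.ctx = get_nfs_open_context(nfs_file_open_context(filp));
	ret = nfs_readpage_async(&desc, inode, page);
	put_nfs_open_context(desc.ctx);
	return ret;
}

int nfs_readpage_async(void *data, struct inode *inode, struct page *page)
{
	struct nfs_readdesc *desc = data;
	struct nfs_page *new = nfs_create_request(desc->ctx, page, 0,
						  nfs_page_length(page));

	/* drive the descriptor embedded in the caller's nfs_readdesc */
	nfs_pageio_init_read(&desc->pgio, inode, false,
			     &nfs_async_read_completion_ops);
	if (!nfs_pageio_add_request(&desc->pgio, new)) {
		nfs_list_remove_request(new);
		nfs_readpage_release(new, desc->pgio.pg_error);
	}
	nfs_pageio_complete(&desc->pgio);
	return desc->pgio.pg_error < 0 ? desc->pgio.pg_error : 0;
}

nfs_readpages() fills in the same struct and hands it to read_cache_pages(..., readpage_async_filler, &desc), which is what later allows the two async submission paths to be merged.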
Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 62 ++++++++++++++++++++++++-------------------------- include/linux/nfs_fs.h | 3 +-- 2 files changed, 31 insertions(+), 34 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 1153c4e0a155..5fda30742a32 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -114,18 +114,23 @@ static void nfs_readpage_release(struct nfs_page *req, int error) nfs_release_request(req); } -int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode, +struct nfs_readdesc { + struct nfs_pageio_descriptor pgio; + struct nfs_open_context *ctx; +}; + +int nfs_readpage_async(void *data, struct inode *inode, struct page *page) { + struct nfs_readdesc *desc = (struct nfs_readdesc *)data; struct nfs_page *new; unsigned int len; - struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; len = nfs_page_length(page); if (len == 0) return nfs_return_empty_page(page); - new = nfs_create_request(ctx, page, 0, len); + new = nfs_create_request(desc->ctx, page, 0, len); if (IS_ERR(new)) { unlock_page(page); return PTR_ERR(new); @@ -133,21 +138,21 @@ int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode, if (len < PAGE_SIZE) zero_user_segment(page, len, PAGE_SIZE); - nfs_pageio_init_read(&pgio, inode, false, + nfs_pageio_init_read(&desc->pgio, inode, false, &nfs_async_read_completion_ops); - if (!nfs_pageio_add_request(&pgio, new)) { + if (!nfs_pageio_add_request(&desc->pgio, new)) { nfs_list_remove_request(new); - nfs_readpage_release(new, pgio.pg_error); + nfs_readpage_release(new, desc->pgio.pg_error); } - nfs_pageio_complete(&pgio); + nfs_pageio_complete(&desc->pgio); /* It doesn't make sense to do mirrored reads! */ - WARN_ON_ONCE(pgio.pg_mirror_count != 1); + WARN_ON_ONCE(desc->pgio.pg_mirror_count != 1); - pgm = &pgio.pg_mirrors[0]; + pgm = &desc->pgio.pg_mirrors[0]; NFS_I(inode)->read_io += pgm->pg_bytes_written; - return pgio.pg_error < 0 ? pgio.pg_error : 0; + return desc->pgio.pg_error < 0 ? 
desc->pgio.pg_error : 0; } static void nfs_page_group_set_uptodate(struct nfs_page *req) @@ -312,7 +317,7 @@ static void nfs_readpage_result(struct rpc_task *task, */ int nfs_readpage(struct file *filp, struct page *page) { - struct nfs_open_context *ctx; + struct nfs_readdesc desc; struct inode *inode = page_file_mapping(page)->host; int ret; @@ -339,39 +344,34 @@ int nfs_readpage(struct file *filp, struct page *page) if (filp == NULL) { ret = -EBADF; - ctx = nfs_find_open_context(inode, NULL, FMODE_READ); - if (ctx == NULL) + desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); + if (desc.ctx == NULL) goto out_unlock; } else - ctx = get_nfs_open_context(nfs_file_open_context(filp)); + desc.ctx = get_nfs_open_context(nfs_file_open_context(filp)); if (!IS_SYNC(inode)) { - ret = nfs_readpage_from_fscache(ctx, inode, page); + ret = nfs_readpage_from_fscache(desc.ctx, inode, page); if (ret == 0) goto out; } - xchg(&ctx->error, 0); - ret = nfs_readpage_async(ctx, inode, page); + xchg(&desc.ctx->error, 0); + ret = nfs_readpage_async(&desc, inode, page); if (!ret) { ret = wait_on_page_locked_killable(page); if (!PageUptodate(page) && !ret) - ret = xchg(&ctx->error, 0); + ret = xchg(&desc.ctx->error, 0); } nfs_add_stats(inode, NFSIOS_READPAGES, 1); out: - put_nfs_open_context(ctx); + put_nfs_open_context(desc.ctx); return ret; out_unlock: unlock_page(page); return ret; } -struct nfs_readdesc { - struct nfs_pageio_descriptor *pgio; - struct nfs_open_context *ctx; -}; - static int readpage_async_filler(void *data, struct page *page) { @@ -390,9 +390,9 @@ struct nfs_readdesc { if (len < PAGE_SIZE) zero_user_segment(page, len, PAGE_SIZE); - if (!nfs_pageio_add_request(desc->pgio, new)) { + if (!nfs_pageio_add_request(&desc->pgio, new)) { nfs_list_remove_request(new); - error = desc->pgio->pg_error; + error = desc->pgio.pg_error; nfs_readpage_release(new, error); goto out; } @@ -407,7 +407,6 @@ struct nfs_readdesc { int nfs_readpages(struct file *filp, struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { - struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; struct nfs_readdesc desc; struct inode *inode = mapping->host; @@ -440,17 +439,16 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, if (ret == 0) goto read_complete; /* all pages were read */ - desc.pgio = &pgio; - nfs_pageio_init_read(&pgio, inode, false, + nfs_pageio_init_read(&desc.pgio, inode, false, &nfs_async_read_completion_ops); ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc); - nfs_pageio_complete(&pgio); + nfs_pageio_complete(&desc.pgio); /* It doesn't make sense to do mirrored reads! 
*/ - WARN_ON_ONCE(pgio.pg_mirror_count != 1); + WARN_ON_ONCE(desc.pgio.pg_mirror_count != 1); - pgm = &pgio.pg_mirrors[0]; + pgm = &desc.pgio.pg_mirrors[0]; NFS_I(inode)->read_io += pgm->pg_bytes_written; npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >> PAGE_SHIFT; diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index a2c6455ea3fa..3cc42b5c7482 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -565,8 +565,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s extern int nfs_readpage(struct file *, struct page *); extern int nfs_readpages(struct file *, struct address_space *, struct list_head *, unsigned); -extern int nfs_readpage_async(struct nfs_open_context *, struct inode *, - struct page *); +extern int nfs_readpage_async(void *, struct inode *, struct page *); /* * inline functions From patchwork Sat Nov 21 13:29:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15638C56202 for ; Sat, 21 Nov 2020 13:29:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C436622206 for ; Sat, 21 Nov 2020 13:29:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ikOqzvQ7" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727807AbgKUN30 (ORCPT ); Sat, 21 Nov 2020 08:29:26 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:22205 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN30 (ORCPT ); Sat, 21 Nov 2020 08:29:26 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965365; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=+Om3M0BIxbSuF3ZOTs6rshwBCB7b8mi5Pxx0b5la7A4=; b=ikOqzvQ7cdOtC3N6wKGcv4sH0W5tMU9zDZ2Jvs7K6coMDDRZm4iMAmd3sSIyHU/exG3g49 r689uczevoxD9KCjCwIAEn1mZyNsjtrRjom+UjSWzax4LqObqfCClrjeoQX3+wPG8zoX22 tGJg8tML/BvR90XRhzcQTD+UTjtIypw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-464-puEb43ZsP8aFux_UD7fq3w-1; Sat, 21 Nov 2020 08:29:21 -0500 X-MC-Unique: puEb43ZsP8aFux_UD7fq3w-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3ECD780EDAA; Sat, 21 Nov 2020 13:29:20 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id BB9196085D; Sat, 21 Nov 2020 13:29:19 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 04/13] NFS: Call 
readpage_async_filler() from nfs_readpage_async() Date: Sat, 21 Nov 2020 08:29:18 -0500 Message-Id: <1605965358-24607-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Refactor slightly so nfs_readpage_async() calls into readpage_async_filler(). Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 28 +++++++++++----------------- 1 file changed, 11 insertions(+), 17 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 5fda30742a32..1401f1c734c0 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -119,31 +119,22 @@ struct nfs_readdesc { struct nfs_open_context *ctx; }; +static int readpage_async_filler(void *data, struct page *page); + int nfs_readpage_async(void *data, struct inode *inode, struct page *page) { struct nfs_readdesc *desc = (struct nfs_readdesc *)data; - struct nfs_page *new; - unsigned int len; struct nfs_pgio_mirror *pgm; - - len = nfs_page_length(page); - if (len == 0) - return nfs_return_empty_page(page); - new = nfs_create_request(desc->ctx, page, 0, len); - if (IS_ERR(new)) { - unlock_page(page); - return PTR_ERR(new); - } - if (len < PAGE_SIZE) - zero_user_segment(page, len, PAGE_SIZE); + int error; nfs_pageio_init_read(&desc->pgio, inode, false, &nfs_async_read_completion_ops); - if (!nfs_pageio_add_request(&desc->pgio, new)) { - nfs_list_remove_request(new); - nfs_readpage_release(new, desc->pgio.pg_error); - } + + error = readpage_async_filler(desc, page); + if (error) + goto out; + nfs_pageio_complete(&desc->pgio); /* It doesn't make sense to do mirrored reads! */ @@ -153,6 +144,9 @@ int nfs_readpage_async(void *data, struct inode *inode, NFS_I(inode)->read_io += pgm->pg_bytes_written; return desc->pgio.pg_error < 0 ? desc->pgio.pg_error : 0; + +out: + return error; } static void nfs_page_group_set_uptodate(struct nfs_page *req) From patchwork Sat Nov 21 13:29:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31A1EC63697 for ; Sat, 21 Nov 2020 13:29:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EBA8322206 for ; Sat, 21 Nov 2020 13:29:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="dltqCOah" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727834AbgKUN3g (ORCPT ); Sat, 21 Nov 2020 08:29:36 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:30378 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727823AbgKUN3g (ORCPT ); Sat, 21 Nov 2020 08:29:36 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965374; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=PyEqaw6kiks288ycj7lAIQMeFNdhXzO+mMgP9/ubAEs=; 
b=dltqCOahQnAeumAzUePBKBeVIEGZSyM4np3r6oaLS08uEun8P7ViM/RpzXRHMFu0qhfCO7 hZKJHGvp0HkoJxhJd+l0FHmapKmqtXCfYKUh+ZBazB63BP8896yuG3u9UDMz2uyA7xx236 /wYf3V+0gLs1p0EEqryjWhqhMu5t6gU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-502-j78Q7oVGOKS9Z4qr8cZq7A-1; Sat, 21 Nov 2020 08:29:23 -0500 X-MC-Unique: j78Q7oVGOKS9Z4qr8cZq7A-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6B1081005D65; Sat, 21 Nov 2020 13:29:22 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id E60115C1D5; Sat, 21 Nov 2020 13:29:21 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 05/13] NFS: Add nfs_pageio_complete_read() and remove nfs_readpage_async() Date: Sat, 21 Nov 2020 08:29:20 -0500 Message-Id: <1605965360-24642-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Add nfs_pageio_complete_read() and call this from both nfs_readpage() and nfs_readpages(), since the submission and accounting is the same for both functions. Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 137 ++++++++++++++++++++++--------------------------- include/linux/nfs_fs.h | 1 - 2 files changed, 61 insertions(+), 77 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 1401f1c734c0..c2df4040f26c 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -74,6 +74,24 @@ void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, } EXPORT_SYMBOL_GPL(nfs_pageio_init_read); +static void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio, + struct inode *inode) +{ + struct nfs_pgio_mirror *pgm; + unsigned long npages; + + nfs_pageio_complete(pgio); + + /* It doesn't make sense to do mirrored reads! */ + WARN_ON_ONCE(pgio->pg_mirror_count != 1); + + pgm = &pgio->pg_mirrors[0]; + NFS_I(inode)->read_io += pgm->pg_bytes_written; + npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >> PAGE_SHIFT; + nfs_add_stats(inode, NFSIOS_READPAGES, npages); +} + + void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio) { struct nfs_pgio_mirror *mirror; @@ -119,36 +137,6 @@ struct nfs_readdesc { struct nfs_open_context *ctx; }; -static int readpage_async_filler(void *data, struct page *page); - -int nfs_readpage_async(void *data, struct inode *inode, - struct page *page) -{ - struct nfs_readdesc *desc = (struct nfs_readdesc *)data; - struct nfs_pgio_mirror *pgm; - int error; - - nfs_pageio_init_read(&desc->pgio, inode, false, - &nfs_async_read_completion_ops); - - error = readpage_async_filler(desc, page); - if (error) - goto out; - - nfs_pageio_complete(&desc->pgio); - - /* It doesn't make sense to do mirrored reads! */ - WARN_ON_ONCE(desc->pgio.pg_mirror_count != 1); - - pgm = &desc->pgio.pg_mirrors[0]; - NFS_I(inode)->read_io += pgm->pg_bytes_written; - - return desc->pgio.pg_error < 0 ? 
desc->pgio.pg_error : 0; - -out: - return error; -} - static void nfs_page_group_set_uptodate(struct nfs_page *req) { if (nfs_page_group_sync_on_bit(req, PG_UPTODATE)) @@ -170,8 +158,7 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) { /* note: regions of the page not covered by a - * request are zeroed in nfs_readpage_async / - * readpage_async_filler */ + * request are zeroed in readpage_async_filler */ if (bytes > hdr->good_bytes) { /* nothing in this request was good, so zero * the full extent of the request */ @@ -303,6 +290,38 @@ static void nfs_readpage_result(struct rpc_task *task, nfs_readpage_retry(task, hdr); } +static int +readpage_async_filler(void *data, struct page *page) +{ + struct nfs_readdesc *desc = (struct nfs_readdesc *)data; + struct nfs_page *new; + unsigned int len; + int error; + + len = nfs_page_length(page); + if (len == 0) + return nfs_return_empty_page(page); + + new = nfs_create_request(desc->ctx, page, 0, len); + if (IS_ERR(new)) + goto out_error; + + if (len < PAGE_SIZE) + zero_user_segment(page, len, PAGE_SIZE); + if (!nfs_pageio_add_request(&desc->pgio, new)) { + nfs_list_remove_request(new); + error = desc->pgio.pg_error; + nfs_readpage_release(new, error); + goto out; + } + return 0; +out_error: + error = PTR_ERR(new); + unlock_page(page); +out: + return error; +} + /* * Read a page over NFS. * We read the page synchronously in the following case: @@ -351,13 +370,20 @@ int nfs_readpage(struct file *filp, struct page *page) } xchg(&desc.ctx->error, 0); - ret = nfs_readpage_async(&desc, inode, page); + nfs_pageio_init_read(&desc.pgio, inode, false, + &nfs_async_read_completion_ops); + + ret = readpage_async_filler(&desc, page); + + if (!ret) + nfs_pageio_complete_read(&desc.pgio, inode); + + ret = desc.pgio.pg_error < 0 ? desc.pgio.pg_error : 0; if (!ret) { ret = wait_on_page_locked_killable(page); if (!PageUptodate(page) && !ret) ret = xchg(&desc.ctx->error, 0); } - nfs_add_stats(inode, NFSIOS_READPAGES, 1); out: put_nfs_open_context(desc.ctx); return ret; @@ -366,45 +392,11 @@ int nfs_readpage(struct file *filp, struct page *page) return ret; } -static int -readpage_async_filler(void *data, struct page *page) -{ - struct nfs_readdesc *desc = (struct nfs_readdesc *)data; - struct nfs_page *new; - unsigned int len; - int error; - - len = nfs_page_length(page); - if (len == 0) - return nfs_return_empty_page(page); - - new = nfs_create_request(desc->ctx, page, 0, len); - if (IS_ERR(new)) - goto out_error; - - if (len < PAGE_SIZE) - zero_user_segment(page, len, PAGE_SIZE); - if (!nfs_pageio_add_request(&desc->pgio, new)) { - nfs_list_remove_request(new); - error = desc->pgio.pg_error; - nfs_readpage_release(new, error); - goto out; - } - return 0; -out_error: - error = PTR_ERR(new); - unlock_page(page); -out: - return error; -} - int nfs_readpages(struct file *filp, struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { - struct nfs_pgio_mirror *pgm; struct nfs_readdesc desc; struct inode *inode = mapping->host; - unsigned long npages; int ret; dprintk("NFS: nfs_readpages (%s/%Lu %d)\n", @@ -437,16 +429,9 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, &nfs_async_read_completion_ops); ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc); - nfs_pageio_complete(&desc.pgio); - /* It doesn't make sense to do mirrored reads! 
*/ - WARN_ON_ONCE(desc.pgio.pg_mirror_count != 1); + nfs_pageio_complete_read(&desc.pgio, inode); - pgm = &desc.pgio.pg_mirrors[0]; - NFS_I(inode)->read_io += pgm->pg_bytes_written; - npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >> - PAGE_SHIFT; - nfs_add_stats(inode, NFSIOS_READPAGES, npages); read_complete: put_nfs_open_context(desc.ctx); out: diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index 3cc42b5c7482..abe6416236c9 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -565,7 +565,6 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s extern int nfs_readpage(struct file *, struct page *); extern int nfs_readpages(struct file *, struct address_space *, struct list_head *, unsigned); -extern int nfs_readpage_async(void *, struct inode *, struct page *); /* * inline functions From patchwork Sat Nov 21 13:29:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61736C56202 for ; Sat, 21 Nov 2020 13:29:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 24DAE22206 for ; Sat, 21 Nov 2020 13:29:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Tp3zjf/d" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727817AbgKUN32 (ORCPT ); Sat, 21 Nov 2020 08:29:28 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:43363 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN32 (ORCPT ); Sat, 21 Nov 2020 08:29:28 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965367; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=99+yO/aWThQ14Eme0jkMqC+QKGQbeO7+eG+X+ZgQ/3o=; b=Tp3zjf/dCasZmIKfdNFJoHIMX3dqdk9WTRFZRLX3/l6XO/FuyeSbQtDO0RF7VNb8JUFqEf xo54MfayPCwUrLPqnIGRqommRVDEKmde01D/Wmq7T/jYMolyiUcAesLHR4se4310diusGH /F61mBlHIivCGZnXJdfMW2DgLl+6RI0= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-130-o_95wvC1NtOZU41jjQyX4Q-1; Sat, 21 Nov 2020 08:29:25 -0500 X-MC-Unique: o_95wvC1NtOZU41jjQyX4Q-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A3C6E1005D65; Sat, 21 Nov 2020 13:29:24 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 2CD6B5D9D7; Sat, 21 Nov 2020 13:29:24 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 06/13] NFS: Allow internal use of 
read structs and functions Date: Sat, 21 Nov 2020 08:29:22 -0500 Message-Id: <1605965362-24677-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The conversion of the NFS read paths to the new fscache API will require use of a few read structs and functions, so move these declarations as required. Signed-off-by: Dave Wysochanski --- fs/nfs/internal.h | 8 ++++++++ fs/nfs/read.c | 13 ++++--------- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 6673a77884d9..513c62fc3df0 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -443,9 +443,17 @@ extern char *nfs_path(char **p, struct dentry *dentry, struct nfs_pgio_completion_ops; /* read.c */ +extern const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, struct inode *inode, bool force_mds, const struct nfs_pgio_completion_ops *compl_ops); +struct nfs_readdesc { + struct nfs_pageio_descriptor pgio; + struct nfs_open_context *ctx; +}; +extern int readpage_async_filler(void *data, struct page *page); +extern void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio, + struct inode *inode); extern void nfs_read_prepare(struct rpc_task *task, void *calldata); extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio); diff --git a/fs/nfs/read.c b/fs/nfs/read.c index c2df4040f26c..13266eda8f60 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -30,7 +30,7 @@ #define NFSDBG_FACILITY NFSDBG_PAGECACHE -static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; +const struct nfs_pgio_completion_ops nfs_async_read_completion_ops; static const struct nfs_rw_ops nfs_rw_read_ops; static struct kmem_cache *nfs_rdata_cachep; @@ -74,7 +74,7 @@ void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, } EXPORT_SYMBOL_GPL(nfs_pageio_init_read); -static void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio, +void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio, struct inode *inode) { struct nfs_pgio_mirror *pgm; @@ -132,11 +132,6 @@ static void nfs_readpage_release(struct nfs_page *req, int error) nfs_release_request(req); } -struct nfs_readdesc { - struct nfs_pageio_descriptor pgio; - struct nfs_open_context *ctx; -}; - static void nfs_page_group_set_uptodate(struct nfs_page *req) { if (nfs_page_group_sync_on_bit(req, PG_UPTODATE)) @@ -215,7 +210,7 @@ static void nfs_initiate_read(struct nfs_pgio_header *hdr, } } -static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = { +const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = { .error_cleanup = nfs_async_read_error, .completion = nfs_read_completion, }; @@ -290,7 +285,7 @@ static void nfs_readpage_result(struct rpc_task *task, nfs_readpage_retry(task, hdr); } -static int +int readpage_async_filler(void *data, struct page *page) { struct nfs_readdesc *desc = (struct nfs_readdesc *)data; From patchwork Sat Nov 21 13:29:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, 
INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F554C388F9 for ; Sat, 21 Nov 2020 13:29:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D178822206 for ; Sat, 21 Nov 2020 13:29:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ZAtSPvNF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727821AbgKUN3c (ORCPT ); Sat, 21 Nov 2020 08:29:32 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:56023 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3c (ORCPT ); Sat, 21 Nov 2020 08:29:32 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965369; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=AkDueXSyhYk/fnhHJOkXgmT8EfmSgc7iKBSb1IeCD2Q=; b=ZAtSPvNFOvCEOJWpazdSjeoyj60NhnGXFGajvSLI2zt2SVYSIpkm84ZYwgzl4Grp8NRWHl s1F2LQ/XNqABB5okhwKW1IHJOazusdFVJD5a/7E/jEYBctOoRrNTYpAnZgonT+YRmJQ0EZ G0o7bb7hpT6yO54vrK1eowqlQf7kzE0= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-170-27r7wh59MqO-hDZ4sWqT8g-1; Sat, 21 Nov 2020 08:29:27 -0500 X-MC-Unique: 27r7wh59MqO-hDZ4sWqT8g-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E457E1DDE7; Sat, 21 Nov 2020 13:29:26 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 5D1375C1D5; Sat, 21 Nov 2020 13:29:26 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 07/13] NFS: Convert fscache_acquire_cookie and fscache_relinquish_cookie Date: Sat, 21 Nov 2020 08:29:24 -0500 Message-Id: <1605965364-24711-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The new FS-Cache netfs API changes the cookie API slightly. The changes to fscache_acquire_cookie are: * remove struct fscache_cookie_def * add 'type' of cookie (was member of fscache_cookie_def) * add 'name' of cookie (was member of fscache_cookie_def) * add 'advice' flags (tells cache how to handle object); set to 0 * add 'preferred_cache' tag (if NULL, derive from parent) * remove 'netfs_data' * remove 'enable' (See fscache_use_cookie()) The changes to fscache_relinquish_cookie are: * remove 'aux_data' Signed-off-by: Dave Wysochanski --- fs/nfs/fscache-index.c | 94 -------------------------------------------------- fs/nfs/fscache.c | 44 +++++++++++++++-------- fs/nfs/fscache.h | 3 -- 3 files changed, 29 insertions(+), 112 deletions(-) diff --git a/fs/nfs/fscache-index.c b/fs/nfs/fscache-index.c index b1049815729e..b4fdacd955f3 100644 --- a/fs/nfs/fscache-index.c +++ b/fs/nfs/fscache-index.c @@ -44,97 +44,3 @@ void nfs_fscache_unregister(void) { fscache_unregister_netfs(&nfs_fscache_netfs); } - -/* - * Define the server object for FS-Cache. 
This is used to describe a server - * object to fscache_acquire_cookie(). It is keyed by the NFS protocol and - * server address parameters. - */ -const struct fscache_cookie_def nfs_fscache_server_index_def = { - .name = "NFS.server", - .type = FSCACHE_COOKIE_TYPE_INDEX, -}; - -/* - * Define the superblock object for FS-Cache. This is used to describe a - * superblock object to fscache_acquire_cookie(). It is keyed by all the NFS - * parameters that might cause a separate superblock. - */ -const struct fscache_cookie_def nfs_fscache_super_index_def = { - .name = "NFS.super", - .type = FSCACHE_COOKIE_TYPE_INDEX, -}; - -/* - * Consult the netfs about the state of an object - * - This function can be absent if the index carries no state data - * - The netfs data from the cookie being used as the target is - * presented, as is the auxiliary data - */ -static -enum fscache_checkaux nfs_fscache_inode_check_aux(void *cookie_netfs_data, - const void *data, - uint16_t datalen, - loff_t object_size) -{ - struct nfs_fscache_inode_auxdata auxdata; - struct nfs_inode *nfsi = cookie_netfs_data; - - if (datalen != sizeof(auxdata)) - return FSCACHE_CHECKAUX_OBSOLETE; - - memset(&auxdata, 0, sizeof(auxdata)); - auxdata.mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; - auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; - auxdata.ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; - auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; - - if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4) - auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode); - - if (memcmp(data, &auxdata, datalen) != 0) - return FSCACHE_CHECKAUX_OBSOLETE; - - return FSCACHE_CHECKAUX_OKAY; -} - -/* - * Get an extra reference on a read context. - * - This function can be absent if the completion function doesn't require a - * context. - * - The read context is passed back to NFS in the event that a data read on the - * cache fails with EIO - in which case the server must be contacted to - * retrieve the data, which requires the read context for security. - */ -static void nfs_fh_get_context(void *context) -{ - get_nfs_open_context(context); -} - -/* - * Release an extra reference on a read context. - * - This function can be absent if the completion function doesn't require a - * context. - */ -static void nfs_fh_put_context(void *context) -{ - if (context) - put_nfs_open_context(context); -} - -/* - * Define the inode object for FS-Cache. This is used to describe an inode - * object to fscache_acquire_cookie(). It is keyed by the NFS file handle for - * an inode. - * - * Coherency is managed by comparing the copies of i_size, i_mtime and i_ctime - * held in the cache auxiliary data for the data storage object with those in - * the inode struct in memory. 
- */ -const struct fscache_cookie_def nfs_fscache_inode_object_def = { - .name = "NFS.fh", - .type = FSCACHE_COOKIE_TYPE_DATAFILE, - .check_aux = nfs_fscache_inode_check_aux, - .get_context = nfs_fh_get_context, - .put_context = nfs_fh_put_context, -}; diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index a60df88efc40..7ef711ca9592 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -81,10 +81,15 @@ void nfs_fscache_get_client_cookie(struct nfs_client *clp) /* create a cache index for looking up filehandles */ clp->fscache = fscache_acquire_cookie(nfs_fscache_netfs.primary_index, - &nfs_fscache_server_index_def, - &key, len, - NULL, 0, - clp, 0, true); + FSCACHE_COOKIE_TYPE_INDEX, + "NFS.server", + 0, /* advice */ + NULL, /* preferred_cache */ + &key, /* index_key */ + len, + NULL, /* aux_data */ + 0, + 0); dfprintk(FSCACHE, "NFS: get client cookie (0x%p/0x%p)\n", clp, clp->fscache); } @@ -97,7 +102,7 @@ void nfs_fscache_release_client_cookie(struct nfs_client *clp) dfprintk(FSCACHE, "NFS: releasing client cookie (0x%p/0x%p)\n", clp, clp->fscache); - fscache_relinquish_cookie(clp->fscache, NULL, false); + fscache_relinquish_cookie(clp->fscache, false); clp->fscache = NULL; } @@ -185,11 +190,15 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int /* create a cache index for looking up filehandles */ nfss->fscache = fscache_acquire_cookie(nfss->nfs_client->fscache, - &nfs_fscache_super_index_def, - &key->key, + FSCACHE_COOKIE_TYPE_INDEX, + "NFS.super", + 0, /* advice */ + NULL, /* preferred_cache */ + &key->key, /* index_key */ sizeof(key->key) + ulen, - NULL, 0, - nfss, 0, true); + NULL, /* aux_data */ + 0, + 0); dfprintk(FSCACHE, "NFS: get superblock cookie (0x%p/0x%p)\n", nfss, nfss->fscache); return; @@ -213,7 +222,7 @@ void nfs_fscache_release_super_cookie(struct super_block *sb) dfprintk(FSCACHE, "NFS: releasing superblock cookie (0x%p/0x%p)\n", nfss, nfss->fscache); - fscache_relinquish_cookie(nfss->fscache, NULL, false); + fscache_relinquish_cookie(nfss->fscache, false); nfss->fscache = NULL; if (nfss->fscache_key) { @@ -254,10 +263,15 @@ void nfs_fscache_init_inode(struct inode *inode) nfs_fscache_update_auxdata(&auxdata, nfsi); nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache, - &nfs_fscache_inode_object_def, - nfsi->fh.data, nfsi->fh.size, - &auxdata, sizeof(auxdata), - nfsi, nfsi->vfs_inode.i_size, false); + FSCACHE_COOKIE_TYPE_DATAFILE, + "NFS.fh", + 0, /* advice */ + NULL, /* preferred_cache */ + nfsi->fh.data, /* index_key */ + nfsi->fh.size, + &auxdata, /* aux_data */ + sizeof(auxdata), + i_size_read(&nfsi->vfs_inode)); } /* @@ -272,7 +286,7 @@ void nfs_fscache_clear_inode(struct inode *inode) dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie); nfs_fscache_update_auxdata(&auxdata, nfsi); - fscache_relinquish_cookie(cookie, &auxdata, false); + fscache_relinquish_cookie(cookie, false); nfsi->fscache = NULL; } diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 6754c8607230..6e6d9971244a 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -73,9 +73,6 @@ struct nfs_fscache_inode_auxdata { * fscache-index.c */ extern struct fscache_netfs nfs_fscache_netfs; -extern const struct fscache_cookie_def nfs_fscache_server_index_def; -extern const struct fscache_cookie_def nfs_fscache_super_index_def; -extern const struct fscache_cookie_def nfs_fscache_inode_object_def; extern int nfs_fscache_register(void); extern void nfs_fscache_unregister(void); From patchwork Sat Nov 21 13:29:27 2020 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB670C388F9 for ; Sat, 21 Nov 2020 13:29:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8D04F22206 for ; Sat, 21 Nov 2020 13:29:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UyhGKrFX" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727828AbgKUN3f (ORCPT ); Sat, 21 Nov 2020 08:29:35 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:41191 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3e (ORCPT ); Sat, 21 Nov 2020 08:29:34 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965373; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=t6sT8eK0HKwBbciQaYAbZaxrD7klMu9hQd/5sSyOW/Q=; b=UyhGKrFXbjG5upqYzKjtYsIBvGw2J3HDYZ/je2BJIlTpwcxO5dZz2TbpRMf842nxg2oh2X +p6wKINriNL0Nwf+YLHezDRnOip9uWze7vbahHvFhYNI4LTI02xkGJ74rTwA9a8uN3CUBr VjIG6k6AdwCIPX4czw9p4CUyixC8RGE= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-445-7HubOquYPR6u9xprbWbIfA-1; Sat, 21 Nov 2020 08:29:30 -0500 X-MC-Unique: 7HubOquYPR6u9xprbWbIfA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EC141814411; Sat, 21 Nov 2020 13:29:28 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 804A412D7E; Sat, 21 Nov 2020 13:29:28 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 08/13] NFS: Convert fscache_enable_cookie and fscache_disable_cookie Date: Sat, 21 Nov 2020 08:29:27 -0500 Message-Id: <1605965367-24746-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The new FS-Cache API removes the fscache_enable_cookie() and fscache_disable_cookie() in favor of the new APIs fscache_use_cookie() and fscache_unuse_cookie(), so update these areas as needed. For NFS, we enable fscache on an inode only if the inode is open readonly and not if the inode is open for write. Use the new APIs and change the existing logic slightly and make a decision whether to "use" an inode based fscache cookie one time, by gating the call to fscache_use_cookie() and fscache_unuse_cookie() by the NFS_INO_FSCACHE flag on the nfs_inode. 
Signed-off-by: Dave Wysochanski --- fs/nfs/fscache.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index 7ef711ca9592..d0477e40aa32 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -285,7 +285,10 @@ void nfs_fscache_clear_inode(struct inode *inode) dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie); - nfs_fscache_update_auxdata(&auxdata, nfsi); + if (test_and_clear_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags)) { + nfs_fscache_update_auxdata(&auxdata, nfsi); + fscache_unuse_cookie(cookie, &auxdata, NULL); + } fscache_relinquish_cookie(cookie, false); nfsi->fscache = NULL; } @@ -325,19 +328,17 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp) if (!fscache_cookie_valid(cookie)) return; - nfs_fscache_update_auxdata(&auxdata, nfsi); - if (inode_is_open_for_write(inode)) { - dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi); - clear_bit(NFS_INO_FSCACHE, &nfsi->flags); - fscache_disable_cookie(cookie, &auxdata, true); - fscache_uncache_all_inode_pages(cookie, inode); + if (test_and_clear_bit(NFS_INO_FSCACHE, &nfsi->flags)) { + dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi); + nfs_fscache_update_auxdata(&auxdata, nfsi); + fscache_unuse_cookie(cookie, &auxdata, NULL); + } } else { - dfprintk(FSCACHE, "NFS: nfsi 0x%p enabling cache\n", nfsi); - fscache_enable_cookie(cookie, &auxdata, nfsi->vfs_inode.i_size, - nfs_fscache_can_enable, inode); - if (fscache_cookie_enabled(cookie)) - set_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags); + if (!test_and_set_bit(NFS_INO_FSCACHE, &nfsi->flags)) { + dfprintk(FSCACHE, "NFS: nfsi 0x%p enabling cache\n", nfsi); + fscache_use_cookie(cookie, false); + } } } EXPORT_SYMBOL_GPL(nfs_fscache_open_file); From patchwork Sat Nov 21 13:29:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923379 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2BABC56202 for ; Sat, 21 Nov 2020 13:29:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7A32A22206 for ; Sat, 21 Nov 2020 13:29:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="WB8ceDoX" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727823AbgKUN3h (ORCPT ); Sat, 21 Nov 2020 08:29:37 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:59251 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727827AbgKUN3g (ORCPT ); Sat, 21 Nov 2020 08:29:36 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965374; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=aIhNDUGmnqR3uHiXMsSBbOvLJ1UnCFBC9WvzLh9SNrg=; b=WB8ceDoX9xiA2PAQxGrgkI+fETyzNLuU9ph4ldUgckAk3/6NCB2shinTDnvqhMPZezCnEV 
F3d17m3u2CKMDJbcDSZJjTNzh81Ap08Yt+mfTKA0cUo1F6nWdqBy89vvXTdcbGJeLwf6mO dEKGtBYIA41rl91BJsDSZhA/S4Acx+I= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-447-7sDlgxUVOASMjZbuuWp4dQ-1; Sat, 21 Nov 2020 08:29:32 -0500 X-MC-Unique: 7sDlgxUVOASMjZbuuWp4dQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1B4A61005D69; Sat, 21 Nov 2020 13:29:31 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 90A585D6BA; Sat, 21 Nov 2020 13:29:30 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 09/13] NFS: Convert fscache invalidation and update aux_data and i_size Date: Sat, 21 Nov 2020 08:29:29 -0500 Message-Id: <1605965369-24781-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Convert nfs_fscache_invalidate to the new FS-Cache API. Also, now when invalidating an fscache cookie, be sure to pass the latest i_size as well as aux_data to fscache. A few APIs no longer exist so remove them. We can call directly to wait_on_page_fscache() because it checks whether a page is an fscache page before waiting on it. Since the current NFS fscache implementation is only enabled when a file is open for read, handle writes with invalidation. If a write extends the size of the file and fscache is enabled, we must invalidate the object with the new size. We must also invalidate fscache when doing a direct write to avoid issues seen specifically with NFSv4.x when direct writes are followed by non-direct (buffered) reads. Note that we cannot call fscache_invalidate() while holding inode->i_lock. 
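For reference, the resulting address_space hooks look roughly as follows; the bodies are condensed from the fs/nfs/file.c hunks below (the partial-page guard at the top of nfs_invalidate_page() is elided) and are shown only to illustrate the new wait/invalidate calls:

static void nfs_invalidate_page(struct page *page, unsigned int offset,
				unsigned int length)
{
	/* partial-page invalidations return early (guard elided) */
	nfs_wb_page_cancel(page_file_mapping(page)->host, page);
	/* safe to call unconditionally: wait_on_page_fscache() checks
	 * PG_fscache itself before sleeping */
	wait_on_page_fscache(page);
	nfs_fscache_invalidate(page_file_mapping(page)->host, 0);
}

static int nfs_release_page(struct page *page, gfp_t gfp)
{
	/* PagePrivate() means pending NFS writes, so the page isn't freeable */
	if (PagePrivate(page))
		return false;
	if (PageFsCache(page)) {
		/* only wait on the cache when reclaim is allowed to block */
		if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS))
			return false;
		wait_on_page_fscache(page);
	}
	return true;
}

The direct-write case follows the same rule: nfs_file_direct_write() now ends with nfs_fscache_invalidate(inode, FSCACHE_INVAL_DIO_WRITE) so that a later buffered read cannot be satisfied from a stale cache object.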
Signed-off-by: Dave Wysochanski --- fs/nfs/direct.c | 2 ++ fs/nfs/file.c | 20 ++++++++++------ fs/nfs/fscache.c | 20 ---------------- fs/nfs/fscache.h | 69 +++++++++++++++++++------------------------------------ fs/nfs/inode.c | 4 ++-- fs/nfs/nfs4proc.c | 2 +- fs/nfs/write.c | 3 ++- 7 files changed, 44 insertions(+), 76 deletions(-) diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c index 2d30a4da49fa..8f920c1ad522 100644 --- a/fs/nfs/direct.c +++ b/fs/nfs/direct.c @@ -59,6 +59,7 @@ #include "internal.h" #include "iostat.h" #include "pnfs.h" +#include "fscache.h" #define NFSDBG_FACILITY NFSDBG_VFS @@ -962,6 +963,7 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter) } else { result = requested; } + nfs_fscache_invalidate(inode, FSCACHE_INVAL_DIO_WRITE); out_release: nfs_direct_req_release(dreq); out: diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 63940a7a70be..ba988e639037 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -415,8 +415,8 @@ static void nfs_invalidate_page(struct page *page, unsigned int offset, return; /* Cancel any unstarted writes on this page */ nfs_wb_page_cancel(page_file_mapping(page)->host, page); - - nfs_fscache_invalidate_page(page, page->mapping->host); + wait_on_page_fscache(page); + nfs_fscache_invalidate(page_file_mapping(page)->host, 0); } /* @@ -431,8 +431,13 @@ static int nfs_release_page(struct page *page, gfp_t gfp) /* If PagePrivate() is set, then the page is not freeable */ if (PagePrivate(page)) - return 0; - return nfs_fscache_release_page(page, gfp); + return false; + if (PageFsCache(page)) { + if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS)) + return false; + wait_on_page_fscache(page); + } + return true; } static void nfs_check_dirty_writeback(struct page *page, @@ -475,12 +480,11 @@ static void nfs_check_dirty_writeback(struct page *page, static int nfs_launder_page(struct page *page) { struct inode *inode = page_file_mapping(page)->host; - struct nfs_inode *nfsi = NFS_I(inode); dfprintk(PAGECACHE, "NFS: launder_page(%ld, %llu)\n", inode->i_ino, (long long)page_offset(page)); - nfs_fscache_wait_on_page_write(nfsi, page); + wait_on_page_fscache(page); return nfs_wb_page(inode, page); } @@ -555,7 +559,9 @@ static vm_fault_t nfs_vm_page_mkwrite(struct vm_fault *vmf) sb_start_pagefault(inode->i_sb); /* make sure the cache has finished storing the page */ - nfs_fscache_wait_on_page_write(NFS_I(inode), page); + if (PageFsCache(vmf->page) && + wait_on_page_bit_killable(vmf->page, PG_fscache) < 0) + return VM_FAULT_RETRY; wait_on_bit_action(&NFS_I(inode)->flags, NFS_INO_INVALIDATING, nfs_wait_bit_killable, TASK_KILLABLE); diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index d0477e40aa32..c1ca241381cb 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -234,19 +234,6 @@ void nfs_fscache_release_super_cookie(struct super_block *sb) } } -static void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata, - struct nfs_inode *nfsi) -{ - memset(auxdata, 0, sizeof(*auxdata)); - auxdata->mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; - auxdata->mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; - auxdata->ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; - auxdata->ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; - - if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4) - auxdata->change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode); -} - /* * Initialise the per-inode cache cookie pointer for an NFS inode. 
*/ @@ -293,13 +280,6 @@ void nfs_fscache_clear_inode(struct inode *inode) nfsi->fscache = NULL; } -static bool nfs_fscache_can_enable(void *data) -{ - struct inode *inode = data; - - return !inode_is_open_for_write(inode); -} - /* * Enable or disable caching for a file that is being opened as appropriate. * The cookie is allocated when the inode is initialised, but is not enabled at diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 6e6d9971244a..9fcaa17584b6 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -12,6 +12,7 @@ #include #include #include +#include #ifdef CONFIG_NFS_FSCACHE @@ -90,37 +91,13 @@ struct nfs_fscache_inode_auxdata { extern void nfs_fscache_clear_inode(struct inode *); extern void nfs_fscache_open_file(struct inode *, struct file *); -extern void __nfs_fscache_invalidate_page(struct page *, struct inode *); -extern int nfs_fscache_release_page(struct page *, gfp_t); - extern int __nfs_readpage_from_fscache(struct nfs_open_context *, struct inode *, struct page *); extern int __nfs_readpages_from_fscache(struct nfs_open_context *, struct inode *, struct address_space *, struct list_head *, unsigned *); -extern void __nfs_readpage_to_fscache(struct inode *, struct page *, int); - -/* - * wait for a page to complete writing to the cache - */ -static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi, - struct page *page) -{ - if (PageFsCache(page)) - fscache_wait_on_page_write(nfsi->fscache, page); -} - -/* - * release the caching state associated with a page if undergoing complete page - * invalidation - */ -static inline void nfs_fscache_invalidate_page(struct page *page, - struct inode *inode) -{ - if (PageFsCache(page)) - __nfs_fscache_invalidate_page(page, inode); -} - +extern void __nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes); /* * Retrieve a page from an inode data storage object. */ @@ -160,20 +137,32 @@ static inline void nfs_readpage_to_fscache(struct inode *inode, __nfs_readpage_to_fscache(inode, page, sync); } -/* - * Invalidate the contents of fscache for this inode. This will not sleep. - */ -static inline void nfs_fscache_invalidate(struct inode *inode) +static inline void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata, + struct nfs_inode *nfsi) { - fscache_invalidate(NFS_I(inode)->fscache); + memset(auxdata, 0, sizeof(*auxdata)); + auxdata->mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; + auxdata->mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; + auxdata->ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; + auxdata->ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; + + if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4) + auxdata->change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode); } /* - * Wait for an object to finish being invalidated. + * Invalidate the contents of fscache for this inode. This will not sleep. 
*/ -static inline void nfs_fscache_wait_on_invalidate(struct inode *inode) +static inline void nfs_fscache_invalidate(struct inode *inode, int flags) { - fscache_wait_on_invalidate(NFS_I(inode)->fscache); + struct nfs_fscache_inode_auxdata auxdata; + struct nfs_inode *nfsi = NFS_I(inode); + + if (nfsi->fscache) { + nfs_fscache_update_auxdata(&auxdata, nfsi); + fscache_invalidate(nfsi->fscache, &auxdata, + i_size_read(&nfsi->vfs_inode), flags); + } } /* @@ -200,15 +189,6 @@ static inline void nfs_fscache_clear_inode(struct inode *inode) {} static inline void nfs_fscache_open_file(struct inode *inode, struct file *filp) {} -static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp) -{ - return 1; /* True: may release page */ -} -static inline void nfs_fscache_invalidate_page(struct page *page, - struct inode *inode) {} -static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi, - struct page *page) {} - static inline int nfs_readpage_from_fscache(struct nfs_open_context *ctx, struct inode *inode, struct page *page) @@ -227,8 +207,7 @@ static inline void nfs_readpage_to_fscache(struct inode *inode, struct page *page, int sync) {} -static inline void nfs_fscache_invalidate(struct inode *inode) {} -static inline void nfs_fscache_wait_on_invalidate(struct inode *inode) {} +static inline void nfs_fscache_invalidate(struct inode *inode, int flags) {} static inline const char *nfs_server_fscache_state(struct nfs_server *server) { diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index aa6493905bbe..c525dea851c8 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -213,7 +213,7 @@ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags) flags &= ~(NFS_INO_INVALID_DATA|NFS_INO_DATA_INVAL_DEFER); nfsi->cache_validity |= flags; if (flags & NFS_INO_INVALID_DATA) - nfs_fscache_invalidate(inode); + nfs_fscache_invalidate(inode, 0); } /* @@ -1240,6 +1240,7 @@ static int nfs_invalidate_mapping(struct inode *inode, struct address_space *map struct nfs_inode *nfsi = NFS_I(inode); int ret; + nfs_fscache_invalidate(inode, 0); if (mapping->nrpages != 0) { if (S_ISREG(inode->i_mode)) { ret = nfs_sync_mapping(mapping); @@ -1256,7 +1257,6 @@ static int nfs_invalidate_mapping(struct inode *inode, struct address_space *map spin_unlock(&inode->i_lock); } nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE); - nfs_fscache_wait_on_invalidate(inode); dfprintk(PAGECACHE, "NFS: (%s/%Lu) data cache invalidated\n", inode->i_sb->s_id, diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 9e0ca9b2b210..9799f09f1540 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -1217,7 +1217,7 @@ int nfs4_call_sync(struct rpc_clnt *clnt, nfsi->cache_validity &= ~NFS_INO_INVALID_CHANGE; if (nfsi->cache_validity & NFS_INO_INVALID_DATA) - nfs_fscache_invalidate(inode); + nfs_fscache_invalidate(inode, 0); } void diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 639c34fec04a..4657ac0c68ac 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -293,6 +293,7 @@ static void nfs_grow_file(struct page *page, unsigned int offset, unsigned int c nfs_inc_stats(inode, NFSIOS_EXTENDWRITE); out: spin_unlock(&inode->i_lock); + nfs_fscache_invalidate(inode, 0); } /* A writeback failed: mark the page as bad, and invalidate the page cache */ @@ -2112,7 +2113,7 @@ int nfs_migrate_page(struct address_space *mapping, struct page *newpage, if (PagePrivate(page)) return -EBUSY; - if (!nfs_fscache_release_page(page, GFP_KERNEL)) + if (PageFsCache(page)) return -EBUSY; return migrate_page(mapping, newpage, page, mode); From 
patchwork Sat Nov 21 13:29:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2B97C56202 for ; Sat, 21 Nov 2020 13:29:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 71D8622206 for ; Sat, 21 Nov 2020 13:29:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="SrhO165S" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727827AbgKUN3k (ORCPT ); Sat, 21 Nov 2020 08:29:40 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:45291 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3j (ORCPT ); Sat, 21 Nov 2020 08:29:39 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965376; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=a/BMNedu8DZVEt4fB8L5Z1eQo4E0dqztlOpnGkPppNE=; b=SrhO165SFMx4rQTLpIjkEo951qNWKNwZOAL/mcaXZJDjXUbJ1QmsnEfbaLKGS2CAN04cJj d1O3U054h9fo/L8uX9dGQ4QOAZyG/rbZT/AFwRpIjvVxXOfyYdd1nLFjFoL4kQggIDMjj7 ZxQWHRTAkepc9gv31HZHD1FRbhW/txk= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-503-k_2IvfAvMWW2JoXpatZ6fg-1; Sat, 21 Nov 2020 08:29:34 -0500 X-MC-Unique: k_2IvfAvMWW2JoXpatZ6fg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 32FF380ED8E; Sat, 21 Nov 2020 13:29:33 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id A867116D5E; Sat, 21 Nov 2020 13:29:32 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 10/13] NFS: Convert to the netfs API and nfs_readpage to use netfs_readpage Date: Sat, 21 Nov 2020 08:29:31 -0500 Message-Id: <1605965371-24816-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org This patch converts the main NFS read paths to the new netfs API, when fscache is enabled, and converts readpage while minimizing changes to the existing NFS read code paths. 
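Before the per-operation descriptions below, a condensed sketch of the glue added by this patch may help orient the reader. The names mirror the fs/nfs/fscache.c hunk further down; the _sketch suffixes mark abbreviations made for this summary, not new symbols, and the operation bodies are elided.

/* Abbreviated from the fs/nfs/fscache.c hunk below. */
static struct netfs_read_request_ops nfs_fscache_req_ops_sketch = {
	.init_rreq		= nfs_init_rreq,
	.is_cache_enabled	= nfs_is_cache_enabled,
	.begin_cache_operation	= nfs_begin_cache_operation,
	.issue_op		= nfs_issue_op,		/* send the READ to the server */
	.clamp_length		= nfs_clamp_length,	/* cap subrequests at rsize */
	.cleanup		= nfs_cleanup,
};

static int nfs_readpage_from_fscache_sketch(struct file *filp, struct page *page,
					    struct nfs_readdesc *desc)
{
	/* netfs drives the read and calls back via ->issue_op on a cache miss. */
	return netfs_readpage(filp, page, &nfs_fscache_req_ops_sketch, desc);
}

/*
 * Read completion then reports back so netfs can unlock the pages and,
 * if appropriate, write the fresh data to fscache:
 *
 *	netfs_subreq_terminated(hdr->fsc, hdr->error ?: bytes);
 */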
The netfs API requires a few functions to be provided by the netfs: - init_rreq: allows netfs to allocate resources prior to IO - is_cache_enabled: allows netfs to disable fscache - begin_cache_operation: signals the start of an fscache IO - issue_op: called when netfs should issue read to server - clamp_length: allows netfs to limit size of IO - cleanup: allows netfs to cleanup after an IO is complete The new netfs_readpage() API is called when fscache is enabled. If a read cannot be satisfied from fscache, the netfs is called back via issue_op() to obtain the data from the server. Once the read completes, the netfs must call netfs_subreq_terminated() which then may write the data to fscache. In order to call back into fscache via netfs_subreq_terminated(), we must save the netfs_read_subrequest* as a field in the nfs_pgio_header, similar to nfs_direct_req. If the netfs has a read IO limit (for example, NFS 'rsize' mount options) the clamp_length() function is called. Signed-off-by: Dave Wysochanski --- fs/nfs/fscache.c | 177 ++++++++++++++++++++++++----------------------- fs/nfs/fscache.h | 37 +++++----- fs/nfs/pagelist.c | 2 + fs/nfs/read.c | 9 ++- include/linux/nfs_page.h | 1 + include/linux/nfs_xdr.h | 1 + 6 files changed, 119 insertions(+), 108 deletions(-) diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index c1ca241381cb..782ef2a8733b 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -15,6 +15,9 @@ #include #include #include +#include +#include +#include #include "internal.h" #include "iostat.h" @@ -323,111 +326,124 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp) } EXPORT_SYMBOL_GPL(nfs_fscache_open_file); -/* - * Release the caching state associated with a page, if the page isn't busy - * interacting with the cache. - * - Returns true (can release page) or false (page busy). 
- */ -int nfs_fscache_release_page(struct page *page, gfp_t gfp) +static void nfs_issue_op(struct netfs_read_subrequest *subreq) { - if (PageFsCache(page)) { - struct fscache_cookie *cookie = nfs_i_fscache(page->mapping->host); - - BUG_ON(!cookie); - dfprintk(FSCACHE, "NFS: fscache releasepage (0x%p/0x%p/0x%p)\n", - cookie, page, NFS_I(page->mapping->host)); + struct inode *inode = subreq->rreq->inode; + struct nfs_readdesc *desc = subreq->rreq->netfs_priv; + struct page *page; + pgoff_t start = (subreq->start + subreq->transferred) >> PAGE_SHIFT; + pgoff_t last = ((subreq->start + subreq->len - + subreq->transferred - 1) >> PAGE_SHIFT); + XA_STATE(xas, &subreq->rreq->mapping->i_pages, start); + + dfprintk(FSCACHE, "NFS: %s(fsc:%p s:%lu l:%lu) subreq->start: %lld " + "subreq->len: %ld subreq->transferred: %ld\n", + __func__, nfs_i_fscache(inode), start, last, subreq->start, + subreq->len, subreq->transferred); + + nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL, + last - start + 1); + nfs_pageio_init_read(&desc->pgio, inode, false, + &nfs_async_read_completion_ops); + + desc->pgio.pg_fsc = subreq; /* used in completion */ + + rcu_read_lock(); + xas_for_each(&xas, page, last) { + subreq->error = readpage_async_filler(desc, page); + if (subreq->error < 0) + break; + } + rcu_read_unlock(); + nfs_pageio_complete_read(&desc->pgio, inode); +} - if (!fscache_maybe_release_page(cookie, page, gfp)) - return 0; +static bool nfs_clamp_length(struct netfs_read_subrequest *subreq) +{ + struct inode *inode = subreq->rreq->mapping->host; + unsigned int rsize = NFS_SB(inode->i_sb)->rsize; - nfs_inc_fscache_stats(page->mapping->host, - NFSIOS_FSCACHE_PAGES_UNCACHED); + if (subreq->len > rsize) { + dfprintk(FSCACHE, + "NFS: %s(fsc:%p slen:%lu rsize: %u)\n", + __func__, nfs_i_fscache(inode), subreq->len, rsize); + subreq->len = rsize; } - return 1; + return true; } -/* - * Release the caching state associated with a page if undergoing complete page - * invalidation. - */ -void __nfs_fscache_invalidate_page(struct page *page, struct inode *inode) +static void nfs_cleanup(struct address_space *mapping, void *netfs_priv) { - struct fscache_cookie *cookie = nfs_i_fscache(inode); + ; /* fscache assumes if netfs_priv is given we have cleanup */ +} - BUG_ON(!cookie); +atomic_t nfs_fscache_debug_id; +static void nfs_init_rreq(struct netfs_read_request *rreq, struct file *file) +{ + struct nfs_inode *nfsi = NFS_I(rreq->inode); - dfprintk(FSCACHE, "NFS: fscache invalidatepage (0x%p/0x%p/0x%p)\n", - cookie, page, NFS_I(inode)); + if (nfsi->fscache && test_bit(NFS_INO_FSCACHE, &nfsi->flags)) + rreq->cookie_debug_id = atomic_inc_return(&nfs_fscache_debug_id); +} - fscache_wait_on_page_write(cookie, page); +static bool nfs_is_cache_enabled(struct inode *inode) +{ + struct nfs_inode *nfsi = NFS_I(inode); - BUG_ON(!PageLocked(page)); - fscache_uncache_page(cookie, page); - nfs_inc_fscache_stats(page->mapping->host, - NFSIOS_FSCACHE_PAGES_UNCACHED); + return nfsi->fscache && test_bit(NFS_INO_FSCACHE, &nfsi->flags); } -/* - * Handle completion of a page being read from the cache. - * - Called in process (keventd) context. 
- */ -static void nfs_readpage_from_fscache_complete(struct page *page, - void *context, - int error) +static int nfs_begin_cache_operation(struct netfs_read_request *rreq) { - dfprintk(FSCACHE, - "NFS: readpage_from_fscache_complete (0x%p/0x%p/%d)\n", - page, context, error); - - /* if the read completes with an error, we just unlock the page and let - * the VM reissue the readpage */ - if (!error) { - SetPageUptodate(page); - unlock_page(page); - } else { - error = nfs_readpage_async(context, page->mapping->host, page); - if (error) - unlock_page(page); - } + struct nfs_inode *nfsi = NFS_I(rreq->inode); + return fscache_begin_operation(nfsi->fscache, + &rreq->cache_resources, + FSCACHE_WANT_PARAMS); } +static struct netfs_read_request_ops nfs_fscache_req_ops = { + .init_rreq = nfs_init_rreq, + .is_cache_enabled = nfs_is_cache_enabled, + .begin_cache_operation = nfs_begin_cache_operation, + .issue_op = nfs_issue_op, + .clamp_length = nfs_clamp_length, + .cleanup = nfs_cleanup +}; + /* * Retrieve a page from fscache */ -int __nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, struct page *page) +int __nfs_readpage_from_fscache(struct file *filp, + struct page *page, + struct nfs_readdesc *desc) { int ret; + struct inode *inode = file_inode(filp); dfprintk(FSCACHE, "NFS: readpage_from_fscache(fsc:%p/p:%p(i:%lx f:%lx)/0x%p)\n", nfs_i_fscache(inode), page, page->index, page->flags, inode); - ret = fscache_read_or_alloc_page(nfs_i_fscache(inode), - page, - nfs_readpage_from_fscache_complete, - ctx, - GFP_KERNEL); + ret = netfs_readpage(filp, page, &nfs_fscache_req_ops, desc); switch (ret) { - case 0: /* read BIO submitted (page in fscache) */ - dfprintk(FSCACHE, - "NFS: readpage_from_fscache: BIO submitted\n"); + case 0: /* read submitted */ + dfprintk(FSCACHE, "NFS: readpage_from_fscache: submitted\n"); nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK); return ret; case -ENOBUFS: /* inode not in cache */ case -ENODATA: /* page not in cache */ nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); - dfprintk(FSCACHE, - "NFS: readpage_from_fscache %d\n", ret); + dfprintk(FSCACHE, "NFS: readpage_from_fscache %d\n", ret); return 1; default: dfprintk(FSCACHE, "NFS: readpage_from_fscache %d\n", ret); nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); } + return ret; } @@ -482,30 +498,19 @@ int __nfs_readpages_from_fscache(struct nfs_open_context *ctx, } /* - * Store a newly fetched page in fscache - * - PG_fscache must be set on the page + * Store a newly fetched data in fscache */ -void __nfs_readpage_to_fscache(struct inode *inode, struct page *page, int sync) +void __nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes) { - int ret; - - dfprintk(FSCACHE, - "NFS: readpage_to_fscache(fsc:%p/p:%p(i:%lx f:%lx)/%d)\n", - nfs_i_fscache(inode), page, page->index, page->flags, sync); + struct netfs_read_subrequest *subreq = hdr->fsc; - ret = fscache_write_page(nfs_i_fscache(inode), page, - inode->i_size, GFP_KERNEL); - dfprintk(FSCACHE, - "NFS: readpage_to_fscache: p:%p(i:%lu f:%lx) ret %d\n", - page, page->index, page->flags, ret); - - if (ret != 0) { - fscache_uncache_page(nfs_i_fscache(inode), page); - nfs_inc_fscache_stats(inode, - NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL); - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED); - } else { - nfs_inc_fscache_stats(inode, - NFSIOS_FSCACHE_PAGES_WRITTEN_OK); + if (subreq) { + dfprintk(FSCACHE, + "NFS: read_completion_to_fscache(fsc:%p err:%d bytes:%lu 
subreq->len:%lu\n", + NFS_I(hdr->inode)->fscache, hdr->error, bytes, subreq->len); + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + netfs_subreq_terminated(subreq, hdr->error ?: bytes); + hdr->fsc = NULL; } } diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 9fcaa17584b6..93747bcd4399 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -91,8 +91,9 @@ struct nfs_fscache_inode_auxdata { extern void nfs_fscache_clear_inode(struct inode *); extern void nfs_fscache_open_file(struct inode *, struct file *); -extern int __nfs_readpage_from_fscache(struct nfs_open_context *, - struct inode *, struct page *); +extern int __nfs_readpage_from_fscache(struct file *filp, + struct page *page, + struct nfs_readdesc *desc); extern int __nfs_readpages_from_fscache(struct nfs_open_context *, struct inode *, struct address_space *, struct list_head *, unsigned *); @@ -101,12 +102,12 @@ extern void __nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, /* * Retrieve a page from an inode data storage object. */ -static inline int nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct page *page) +static inline int nfs_readpage_from_fscache(struct file *filp, + struct page *page, + struct nfs_readdesc *desc) { - if (NFS_I(inode)->fscache) - return __nfs_readpage_from_fscache(ctx, inode, page); + if (NFS_I(file_inode(filp))->fscache) + return __nfs_readpage_from_fscache(filp, page, desc); return -ENOBUFS; } @@ -126,15 +127,14 @@ static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, } /* - * Store a page newly fetched from the server in an inode data storage object + * Store pages newly fetched from the server in an inode data storage object * in the cache. */ -static inline void nfs_readpage_to_fscache(struct inode *inode, - struct page *page, - int sync) +static inline void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes) { - if (PageFsCache(page)) - __nfs_readpage_to_fscache(inode, page, sync); + if (NFS_I(hdr->inode)->fscache) + __nfs_read_completion_to_fscache(hdr, bytes); } static inline void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata, @@ -189,9 +189,9 @@ static inline void nfs_fscache_clear_inode(struct inode *inode) {} static inline void nfs_fscache_open_file(struct inode *inode, struct file *filp) {} -static inline int nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct page *page) +static inline int nfs_readpage_from_fscache(struct file *filp, + struct page *page, + struct nfs_readdesc *desc) { return -ENOBUFS; } @@ -203,9 +203,8 @@ static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, { return -ENOBUFS; } -static inline void nfs_readpage_to_fscache(struct inode *inode, - struct page *page, int sync) {} - +static inline void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes) {} static inline void nfs_fscache_invalidate(struct inode *inode, int flags) {} diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index 6985cacf4700..dd7014aa2d5c 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -52,6 +52,7 @@ void nfs_pgheader_init(struct nfs_pageio_descriptor *desc, hdr->good_bytes = mirror->pg_count; hdr->io_completion = desc->pg_io_completion; hdr->dreq = desc->pg_dreq; + hdr->fsc = desc->pg_fsc; hdr->release = release; hdr->completion_ops = desc->pg_completion_ops; if (hdr->completion_ops->init_hdr) @@ -833,6 +834,7 @@ void nfs_pageio_init(struct nfs_pageio_descriptor *desc, 
desc->pg_lseg = NULL; desc->pg_io_completion = NULL; desc->pg_dreq = NULL; + desc->pg_fsc = NULL; desc->pg_bsize = bsize; desc->pg_mirror_count = 1; diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 13266eda8f60..7a76ab474fe0 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -124,10 +124,11 @@ static void nfs_readpage_release(struct nfs_page *req, int error) struct address_space *mapping = page_file_mapping(page); if (PageUptodate(page)) - nfs_readpage_to_fscache(inode, page, 0); + ; /* FIXME: review fscache page error handling */ else if (!PageError(page) && !PagePrivate(page)) generic_error_remove_page(mapping, page); - unlock_page(page); + if (!nfs_i_fscache(inode)) + unlock_page(page); } nfs_release_request(req); } @@ -181,6 +182,8 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) nfs_list_remove_request(req); nfs_readpage_release(req, error); } + /* FIXME: NFS_IOHDR_ERROR and NFS_IOHDR_EOF handled per-page */ + nfs_read_completion_to_fscache(hdr, bytes); out: hdr->release(hdr); } @@ -359,7 +362,7 @@ int nfs_readpage(struct file *filp, struct page *page) desc.ctx = get_nfs_open_context(nfs_file_open_context(filp)); if (!IS_SYNC(inode)) { - ret = nfs_readpage_from_fscache(desc.ctx, inode, page); + ret = nfs_readpage_from_fscache(filp, page, &desc); if (ret == 0) goto out; } diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h index c32c15216da3..991069293ea8 100644 --- a/include/linux/nfs_page.h +++ b/include/linux/nfs_page.h @@ -97,6 +97,7 @@ struct nfs_pageio_descriptor { struct pnfs_layout_segment *pg_lseg; struct nfs_io_completion *pg_io_completion; struct nfs_direct_req *pg_dreq; + void *pg_fsc; unsigned int pg_bsize; /* default bsize for mirrors */ u32 pg_mirror_count; diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h index d63cb862d58e..8e4a0a9c117a 100644 --- a/include/linux/nfs_xdr.h +++ b/include/linux/nfs_xdr.h @@ -1593,6 +1593,7 @@ struct nfs_pgio_header { const struct nfs_rw_ops *rw_ops; struct nfs_io_completion *io_completion; struct nfs_direct_req *dreq; + void *fsc; int pnfs_error; int error; /* merge with pnfs_error */ From patchwork Sat Nov 21 13:29:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68C31C56202 for ; Sat, 21 Nov 2020 13:29:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2C3D922206 for ; Sat, 21 Nov 2020 13:29:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="EL16YtIv" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727837AbgKUN3r (ORCPT ); Sat, 21 Nov 2020 08:29:47 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:53776 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726175AbgKUN3r (ORCPT ); Sat, 21 Nov 2020 08:29:47 -0500 DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965385; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=vDVsgFCkq6y36btUrg7x7SEJfP+WOeJycmq/AhpKb8E=; b=EL16YtIvGwImMY1tED91uV1LCXpQGCuOtJdQBRV/cd3EDFkcN1PIkTsOfiNcjmmmGFbl4b 6Smtu9YoOFZOR0ILjeXAt3arYXH38qzTzwILAMVjiRdra47vn8sIOUd/i/mnUJGAWcMIl3 szHUim2yD8+qyimOftihoL70DgIknjs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-40-Fjed01rPO7Ciw_fDX-0ySw-1; Sat, 21 Nov 2020 08:29:43 -0500 X-MC-Unique: Fjed01rPO7Ciw_fDX-0ySw-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5DF541005D69; Sat, 21 Nov 2020 13:29:42 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 3C5E55C224; Sat, 21 Nov 2020 13:29:40 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 11/13] NFS: Convert readpage to readahead and use netfs_readahead for fscache Date: Sat, 21 Nov 2020 08:29:33 -0500 Message-Id: <1605965373-24850-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The new FS-Cache API does not have a readpages equivalent function, and instead of fscache_read_or_alloc_pages() it implements a readahead function, netfs_readahead(). Call netfs_readahead() if fscache is enabled, and if not, utilize readahead_page() to run through the pages needed calling readpage_async_filler(). 
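To make the two paths explicit, here is a condensed sketch of the resulting nfs_readahead(); the complete version is in the fs/nfs/read.c hunk below, and open-context handling, statistics and error handling are omitted here (the _sketch name is only a label for this summary).

/* Condensed from the fs/nfs/read.c hunk below. */
void nfs_readahead_sketch(struct readahead_control *rac)
{
	struct inode *inode = rac->mapping->host;
	struct nfs_readdesc desc;
	struct page *page;

	/* desc.ctx setup via nfs_find_open_context()/get_nfs_open_context() omitted. */

	/* Cached path: hand the whole readahead window to netfs/fscache. */
	if (nfs_readahead_from_fscache(&desc, rac) == 0)
		return;

	/* Uncached path: walk the pages ourselves, as ->readpages used to. */
	nfs_pageio_init_read(&desc.pgio, inode, false,
			     &nfs_async_read_completion_ops);
	while ((page = readahead_page(rac))) {
		readpage_async_filler(&desc, page);
		put_page(page);
	}
	nfs_pageio_complete_read(&desc.pgio, inode);
}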
Signed-off-by: Dave Wysochanski --- fs/nfs/file.c | 2 +- fs/nfs/fscache.c | 49 ++++++++-------------------------------------- fs/nfs/fscache.h | 25 +++++++++-------------- fs/nfs/read.c | 32 +++++++++++++++--------------- include/linux/nfs_fs.h | 3 +-- include/linux/nfs_iostat.h | 2 +- 6 files changed, 36 insertions(+), 77 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index ba988e639037..ed3d6fbed8d1 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -519,7 +519,7 @@ static void nfs_swap_deactivate(struct file *file) const struct address_space_operations nfs_file_aops = { .readpage = nfs_readpage, - .readpages = nfs_readpages, + .readahead = nfs_readahead, .set_page_dirty = __set_page_dirty_nobuffers, .writepage = nfs_writepage, .writepages = nfs_writepages, diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index 782ef2a8733b..917b92f5461b 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -450,51 +450,18 @@ int __nfs_readpage_from_fscache(struct file *filp, /* * Retrieve a set of pages from fscache */ -int __nfs_readpages_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct address_space *mapping, - struct list_head *pages, - unsigned *nr_pages) +int __nfs_readahead_from_fscache(struct nfs_readdesc *desc, + struct readahead_control *rac) { - unsigned npages = *nr_pages; - int ret; + struct inode *inode = rac->mapping->host; - dfprintk(FSCACHE, "NFS: nfs_getpages_from_fscache (0x%p/%u/0x%p)\n", - nfs_i_fscache(inode), npages, inode); - - ret = fscache_read_or_alloc_pages(nfs_i_fscache(inode), - mapping, pages, nr_pages, - nfs_readpage_from_fscache_complete, - ctx, - mapping_gfp_mask(mapping)); - if (*nr_pages < npages) - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK, - npages); - if (*nr_pages > 0) - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL, - *nr_pages); + dfprintk(FSCACHE, "NFS: nfs_readahead_from_fscache (0x%p/0x%p)\n", + nfs_i_fscache(inode), inode); - switch (ret) { - case 0: /* read submitted to the cache for all pages */ - BUG_ON(!list_empty(pages)); - BUG_ON(*nr_pages != 0); - dfprintk(FSCACHE, - "NFS: nfs_getpages_from_fscache: submitted\n"); + netfs_readahead(rac, &nfs_fscache_req_ops, desc); - return ret; - - case -ENOBUFS: /* some pages aren't cached and can't be */ - case -ENODATA: /* some pages aren't cached */ - dfprintk(FSCACHE, - "NFS: nfs_getpages_from_fscache: no page: %d\n", ret); - return 1; - - default: - dfprintk(FSCACHE, - "NFS: nfs_getpages_from_fscache: ret %d\n", ret); - } - - return ret; + /* FIXME: NFSIOS_NFSIOS_FSCACHE_ stats */ + return 0; } /* diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 93747bcd4399..446a889433a8 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -94,11 +94,11 @@ struct nfs_fscache_inode_auxdata { extern int __nfs_readpage_from_fscache(struct file *filp, struct page *page, struct nfs_readdesc *desc); -extern int __nfs_readpages_from_fscache(struct nfs_open_context *, - struct inode *, struct address_space *, - struct list_head *, unsigned *); +extern int __nfs_readahead_from_fscache(struct nfs_readdesc *desc, + struct readahead_control *rac); extern void __nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, unsigned long bytes); + /* * Retrieve a page from an inode data storage object. */ @@ -114,15 +114,11 @@ static inline int nfs_readpage_from_fscache(struct file *filp, /* * Retrieve a set of pages from an inode data storage object. 
*/ -static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct address_space *mapping, - struct list_head *pages, - unsigned *nr_pages) +static inline int nfs_readahead_from_fscache(struct nfs_readdesc *desc, + struct readahead_control *rac) { - if (NFS_I(inode)->fscache) - return __nfs_readpages_from_fscache(ctx, inode, mapping, pages, - nr_pages); + if (NFS_I(rac->mapping->host)->fscache) + return __nfs_readahead_from_fscache(desc, rac); return -ENOBUFS; } @@ -195,11 +191,8 @@ static inline int nfs_readpage_from_fscache(struct file *filp, { return -ENOBUFS; } -static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct address_space *mapping, - struct list_head *pages, - unsigned *nr_pages) +static inline int nfs_readahead_from_fscache(struct nfs_readdesc *desc, + struct readahead_control *rac) { return -ENOBUFS; } diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 7a76ab474fe0..da44ce68488c 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -390,50 +390,50 @@ int nfs_readpage(struct file *filp, struct page *page) return ret; } -int nfs_readpages(struct file *filp, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +void nfs_readahead(struct readahead_control *rac) { struct nfs_readdesc desc; - struct inode *inode = mapping->host; + struct inode *inode = rac->mapping->host; + struct page *page; int ret; - dprintk("NFS: nfs_readpages (%s/%Lu %d)\n", - inode->i_sb->s_id, - (unsigned long long)NFS_FILEID(inode), - nr_pages); + dprintk("NFS: %s (%s/%llu %lld)\n", __func__, + inode->i_sb->s_id, + (unsigned long long)NFS_FILEID(inode), + readahead_length(rac)); nfs_inc_stats(inode, NFSIOS_VFSREADPAGES); ret = -ESTALE; if (NFS_STALE(inode)) - goto out; + return; - if (filp == NULL) { + if (rac->file == NULL) { ret = -EBADF; desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); if (desc.ctx == NULL) - goto out; + return; } else - desc.ctx = get_nfs_open_context(nfs_file_open_context(filp)); + desc.ctx = get_nfs_open_context(nfs_file_open_context(rac->file)); /* attempt to read as many of the pages as possible from the cache * - this returns -ENOBUFS immediately if the cookie is negative */ - ret = nfs_readpages_from_fscache(desc.ctx, inode, mapping, - pages, &nr_pages); + ret = nfs_readahead_from_fscache(&desc, rac); if (ret == 0) goto read_complete; /* all pages were read */ nfs_pageio_init_read(&desc.pgio, inode, false, &nfs_async_read_completion_ops); - ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc); + while ((page = readahead_page(rac))) { + ret = readpage_async_filler(&desc, page); + put_page(page); + } nfs_pageio_complete_read(&desc.pgio, inode); read_complete: put_nfs_open_context(desc.ctx); -out: - return ret; } int __init nfs_init_readpagecache(void) diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h index abe6416236c9..217900eaf456 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -563,8 +563,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s * linux/fs/nfs/read.c */ extern int nfs_readpage(struct file *, struct page *); -extern int nfs_readpages(struct file *, struct address_space *, - struct list_head *, unsigned); +extern void nfs_readahead(struct readahead_control *rac); /* * inline functions diff --git a/include/linux/nfs_iostat.h b/include/linux/nfs_iostat.h index 027874c36c88..8baf8fb7551d 100644 --- a/include/linux/nfs_iostat.h +++ b/include/linux/nfs_iostat.h @@ -53,7 
+53,7 @@ * NFS page counters * * These count the number of pages read or written via nfs_readpage(), - * nfs_readpages(), or their write equivalents. + * nfs_readahead(), or their write equivalents. * * NB: When adding new byte counters, please include the measured * units in the name of each byte counter to help users of this From patchwork Sat Nov 21 13:29:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38289C388F9 for ; Sat, 21 Nov 2020 13:29:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 087F022206 for ; Sat, 21 Nov 2020 13:29:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="fc+duRVE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726175AbgKUN3s (ORCPT ); Sat, 21 Nov 2020 08:29:48 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:46648 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727835AbgKUN3s (ORCPT ); Sat, 21 Nov 2020 08:29:48 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965387; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=VAXckMHHl5zNItyQP+pQvMSgmB6LlMsGCXAt9gDyUbs=; b=fc+duRVEUu+9XdOPh0cmp/MeQGXDBg5xKi14bWZ998VWQoGuDIcRTGSQ7Mcx+t9OY47yEO r6cfn9+hNc9J/O6sO2Ll5DCWnV3jig8fvYlIYuRhbGRm7qEHYeBk26QpgvAdRThqUDp2yB noFi6PGLom4cbvjQltSZ+2w0Eg/kfAk= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-142-ErDDHUS0Oz2MqX8A3cSb6w-1; Sat, 21 Nov 2020 08:29:45 -0500 X-MC-Unique: ErDDHUS0Oz2MqX8A3cSb6w-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8CEC580EDAC; Sat, 21 Nov 2020 13:29:44 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 23B855D9D7; Sat, 21 Nov 2020 13:29:44 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 12/13] NFS: Allow NFS use of new fscache API in build Date: Sat, 21 Nov 2020 08:29:42 -0500 Message-Id: <1605965382-24902-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Add the ability for NFS to build FS-Cache against the new API. 
Signed-off-by: Dave Wysochanski --- fs/nfs/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig index 8909ef506073..88e1763e02f3 100644 --- a/fs/nfs/Kconfig +++ b/fs/nfs/Kconfig @@ -170,7 +170,7 @@ config ROOT_NFS config NFS_FSCACHE bool "Provide NFS client caching support" - depends on NFS_FS=m && FSCACHE_OLD || NFS_FS=y && FSCACHE_OLD=y + depends on NFS_FS=m && FSCACHE || NFS_FS=y && FSCACHE=y help Say Y here if you want NFS data to be cached locally on disc through the general filesystem cache manager From patchwork Sat Nov 21 13:29:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 11923387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B680C56202 for ; Sat, 21 Nov 2020 13:29:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 183AB22206 for ; Sat, 21 Nov 2020 13:29:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="f39zYXqA" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727838AbgKUN3v (ORCPT ); Sat, 21 Nov 2020 08:29:51 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:49349 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727835AbgKUN3v (ORCPT ); Sat, 21 Nov 2020 08:29:51 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1605965389; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc; bh=J6xwu6+u1foI6kR1phHej5hvt+QmOmb70x1vccYwCSg=; b=f39zYXqAEk8GHyNHiFFDVh6MxIb4EhrGOo0EI69WLCFJzRMzObI0B8h72+jWuaEfhStq5B OYAxljNJuoSInNgxMwXh2MRnIpbkFIP+ewsUQZ98D0UwRksU5leXn1/hBKQKVTaNlO8kfB VBKrfw7bRYdvBmMZI37pGmNgrqBusQw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-289-_TsHyNcXPkSaVYdUicdiJQ-1; Sat, 21 Nov 2020 08:29:47 -0500 X-MC-Unique: _TsHyNcXPkSaVYdUicdiJQ-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B13C13FF5; Sat, 21 Nov 2020 13:29:46 +0000 (UTC) Received: from dwysocha.rdu.csb (ovpn-112-41.rdu2.redhat.com [10.10.112.41]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 38F5A5C1D5; Sat, 21 Nov 2020 13:29:46 +0000 (UTC) From: Dave Wysochanski To: Trond Myklebust , Anna Schumaker Cc: linux-nfs@vger.kernel.org, dhowells@redhat.com Subject: [PATCH v1 13/13] NFS: Ensure proper page unlocking when reads fail with retryable errors Date: Sat, 21 Nov 2020 08:29:44 -0500 Message-Id: <1605965384-24936-1-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The netfs API handles unlock of 
pages involved in IOs. However, the current NFS client implementation only utilizes the netfs API when fscache is enabled. If fscache is disabled, NFS is then responsible for unlocking pages involved in READs. This patch addresses an issue when fscache is enabled and READs complete with a retryable error. Specifically, this problem is easily reproduced with the connectathon 'holey' test with NFSv4.2 and an NFS server that does not support READ_PLUS. With such a configuration, the READ_PLUS operation fails with -ENOTSUPP (-524), and due to commit 8f54c7a4babf ("NFS: Fix spurious EIO read errors"), inside nfs_readpage_release() the page is removed from the mapping via generic_error_remove_page(). Since fscache was enabled, nfs_readpage_release() skipped unlocking the page, with the assumption that netfs_subreq_terminated() would ultimately unlock the page inside netfs_rreq_unlock(). However, since NFS removed the page from the mapping, netfs_rreq_unlock() failed to see the page when iterating with xas_for_each, leaving the page locked. Sometime later, when the page was freed, a bad page error would result. Fix the above by ensuring NFS unlocks the page and does not call netfs_subreq_terminated() when NFS encounters a retryable error with fscache enabled. Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index da44ce68488c..92992f5baf0b 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -123,11 +123,10 @@ static void nfs_readpage_release(struct nfs_page *req, int error) if (nfs_page_group_sync_on_bit(req, PG_UNLOCKPAGE)) { struct address_space *mapping = page_file_mapping(page); - if (PageUptodate(page)) - ; /* FIXME: review fscache page error handling */ - else if (!PageError(page) && !PagePrivate(page)) + if (!PageUptodate(page) && !PageError(page) && !PagePrivate(page)) generic_error_remove_page(mapping, page); - if (!nfs_i_fscache(inode)) + if (!nfs_i_fscache(inode) || + (nfs_i_fscache(inode) && error && !nfs_error_is_fatal_on_server(error))) unlock_page(page); } nfs_release_request(req); @@ -182,8 +181,9 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) nfs_list_remove_request(req); nfs_readpage_release(req, error); } - /* FIXME: NFS_IOHDR_ERROR and NFS_IOHDR_EOF handled per-page */ - nfs_read_completion_to_fscache(hdr, bytes); + /* Only call back into fscache if the read was not retried */ + if (!hdr->error || nfs_error_is_fatal_on_server(hdr->error)) + nfs_read_completion_to_fscache(hdr, bytes); out: hdr->release(hdr); }
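For readers skimming the series, the unlock policy this patch arrives at can be restated as a small sketch; the function name is made up, and the logic simply restates the read.c hunk above without adding anything new.

/* Restated from the hunk above: does NFS itself unlock the page? */
static bool nfs_should_unlock_page_sketch(struct inode *inode, int error)
{
	/* Without fscache, NFS always unlocks, as before. */
	if (!nfs_i_fscache(inode))
		return true;
	/*
	 * With fscache/netfs in use, netfs_rreq_unlock() normally unlocks
	 * the page, but when a read fails with a retryable error we do not
	 * call netfs_subreq_terminated(), so NFS must unlock it itself.
	 */
	return error && !nfs_error_is_fatal_on_server(error);
}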