From patchwork Thu May 15 15:56:47 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Weston Andros Adamson <dros@primarydata.com>
X-Patchwork-Id: 4183421
From: Weston Andros Adamson <dros@primarydata.com>
To: trond.myklebust@primarydata.com
Cc: linux-nfs@vger.kernel.org, Weston Andros Adamson <dros@primarydata.com>
Subject: [PATCH v3 08/18] nfs: page group syncing in write path
Date: Thu, 15 May 2014 11:56:47 -0400
Message-Id: <1400169417-28245-9-git-send-email-dros@primarydata.com>
X-Mailer: git-send-email 1.8.5.2 (Apple Git-48)
In-Reply-To: <1400169417-28245-1-git-send-email-dros@primarydata.com>
References: <1400169417-28245-1-git-send-email-dros@primarydata.com>
X-Mailing-List: linux-nfs@vger.kernel.org

Operations that modify state for a whole page must be synchronized across
all requests within a page group. In the write path, this means calling
end_page_writeback and removing the head request from the inode. Neither of
these operations should be called until all requests in a page group have
reached the point where they would otherwise call it.
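To illustrate the intended behaviour, here is a minimal user-space model of
the "sync on bit" gate (not the kernel implementation -- the real helper is
nfs_page_group_sync_on_bit(), which also takes the group lock; the demo_*
names below are made up for the example):

/*
 * Model: every request in a page group records that it has reached the
 * synchronization point, and only the request that completes the group
 * performs the page-wide action (standing in for end_page_writeback()).
 */
#include <stdbool.h>
#include <stdio.h>

#define DEMO_WB_END	0x1	/* models the PG_WB_END sync bit */

struct demo_req {
	unsigned int flags;	/* models the sync bits in wb_flags */
	struct demo_req *next;	/* circular list, models wb_this_page */
	struct demo_req *head;	/* first request in the group */
};

/* Set @bit on @req; return true only once every group member has it set. */
static bool demo_group_sync_on_bit(struct demo_req *req, unsigned int bit)
{
	struct demo_req *cur;

	req->flags |= bit;

	for (cur = req->head; cur;
	     cur = (cur->next == req->head ? NULL : cur->next))
		if (!(cur->flags & bit))
			return false;	/* some request has not arrived yet */

	return true;			/* last one in: do the page-wide work */
}

static void demo_end_page_writeback(struct demo_req *req)
{
	if (!demo_group_sync_on_bit(req, DEMO_WB_END))
		return;
	printf("page-wide writeback end runs exactly once per group\n");
}

int main(void)
{
	struct demo_req a, b;

	/* Two sub-page requests covering the same page. */
	a = (struct demo_req){ .next = &b, .head = &a };
	b = (struct demo_req){ .next = &a, .head = &a };

	demo_end_page_writeback(&a);	/* prints nothing: group not done */
	demo_end_page_writeback(&b);	/* prints once: whole group done */
	return 0;
}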
This patch should have no effect yet since all page groups currently have
one request, but will come into play when pg_test functions are modified to
split pages into sub-page regions.

Signed-off-by: Weston Andros Adamson <dros@primarydata.com>
---
 fs/nfs/pagelist.c        |  2 ++
 fs/nfs/write.c           | 32 ++++++++++++++++++++------------
 include/linux/nfs_page.h |  2 ++
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index 87cdb4b..ce3faad 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -397,6 +397,8 @@ static void nfs_free_request(struct nfs_page *req)
 		WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags));
 		WARN_ON_ONCE(test_bit(PG_UNLOCKPAGE, &req->wb_flags));
 		WARN_ON_ONCE(test_bit(PG_UPTODATE, &req->wb_flags));
+		WARN_ON_ONCE(test_bit(PG_WB_END, &req->wb_flags));
+		WARN_ON_ONCE(test_bit(PG_REMOVE, &req->wb_flags));
 
 		/* Release struct file and open context */
 		nfs_clear_request(req);
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index d0f30f1..5d75276 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -201,12 +201,15 @@ static void nfs_set_page_writeback(struct page *page)
 	}
 }
 
-static void nfs_end_page_writeback(struct page *page)
+static void nfs_end_page_writeback(struct nfs_page *req)
 {
-	struct inode *inode = page_file_mapping(page)->host;
+	struct inode *inode = page_file_mapping(req->wb_page)->host;
 	struct nfs_server *nfss = NFS_SERVER(inode);
 
-	end_page_writeback(page);
+	if (!nfs_page_group_sync_on_bit(req, PG_WB_END))
+		return;
+
+	end_page_writeback(req->wb_page);
 	if (atomic_long_dec_return(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)
 		clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC);
 }
@@ -397,15 +400,20 @@ static void nfs_inode_remove_request(struct nfs_page *req)
 {
 	struct inode *inode = req->wb_context->dentry->d_inode;
 	struct nfs_inode *nfsi = NFS_I(inode);
+	struct nfs_page *head;
 
-	spin_lock(&inode->i_lock);
-	if (likely(!PageSwapCache(req->wb_page))) {
-		set_page_private(req->wb_page, 0);
-		ClearPagePrivate(req->wb_page);
-		clear_bit(PG_MAPPED, &req->wb_flags);
+	if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) {
+		head = req->wb_head;
+
+		spin_lock(&inode->i_lock);
+		if (likely(!PageSwapCache(head->wb_page))) {
+			set_page_private(head->wb_page, 0);
+			ClearPagePrivate(head->wb_page);
+			clear_bit(PG_MAPPED, &head->wb_flags);
+		}
+		nfsi->npages--;
+		spin_unlock(&inode->i_lock);
 	}
-	nfsi->npages--;
-	spin_unlock(&inode->i_lock);
 	nfs_release_request(req);
 }
@@ -599,7 +607,7 @@ remove_req:
 		nfs_inode_remove_request(req);
 next:
 		nfs_unlock_request(req);
-		nfs_end_page_writeback(req->wb_page);
+		nfs_end_page_writeback(req);
 		do_destroy = !test_bit(NFS_IOHDR_NEED_COMMIT, &hdr->flags);
 		nfs_release_request(req);
 	}
@@ -964,7 +972,7 @@ static void nfs_redirty_request(struct nfs_page *req)
 {
 	nfs_mark_request_dirty(req);
 	nfs_unlock_request(req);
-	nfs_end_page_writeback(req->wb_page);
+	nfs_end_page_writeback(req);
 	nfs_release_request(req);
 }
diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
index 6385175..7d9096d 100644
--- a/include/linux/nfs_page.h
+++ b/include/linux/nfs_page.h
@@ -31,6 +31,8 @@ enum {
 	PG_TEARDOWN,		/* page group sync for destroy */
 	PG_UNLOCKPAGE,		/* page group sync bit in read path */
 	PG_UPTODATE,		/* page group sync bit in read path */
+	PG_WB_END,		/* page group sync bit in write path */
+	PG_REMOVE,		/* page group sync bit in write path */
 };
 
 struct nfs_inode;