From patchwork Thu Jan 22 23:36:16 2015
X-Patchwork-Submitter: Thomas Haynes
X-Patchwork-Id: 5689751
From: Tom Haynes
To: Trond Myklebust
Cc: Linux NFS Mailing list
Subject: [PATCH v5 44/51] nfs41: introduce NFS_LAYOUT_RETURN_BEFORE_CLOSE
Date: Thu, 22 Jan 2015 15:36:16 -0800
Message-Id: <1421969783-92997-45-git-send-email-loghyr@primarydata.com>
In-Reply-To: <1421969783-92997-1-git-send-email-loghyr@primarydata.com>
References: <1421969783-92997-1-git-send-email-loghyr@primarydata.com>
X-Mailing-List: linux-nfs@vger.kernel.org

From: Peng Tao

When it is set, generic pnfs tries to send layoutreturn right before the
last close/delegation return, regardless of whether NFS_LAYOUT_ROC is set.
A layout driver (LD) can thereby make sure layoutreturn is always sent
rather than being omitted. The difference from NFS_LAYOUT_RETURN is that
NFS_LAYOUT_RETURN_BEFORE_CLOSE does not block usage of the layout, so an
LD can set it and still expect the generic layer to try the pNFS path at
the same time.
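For illustration only, a minimal sketch of how a layout driver could opt
in to this behaviour. The helper name ld_mark_return_before_close() and
its placement are hypothetical and not part of this patch; the real
consumers of the bit are the pnfs_roc()/pnfs_roc_drain() hunks below.

/* Assumed to live in an fs/nfs layout driver, next to pnfs.h. */
#include "pnfs.h"

/*
 * Hypothetical helper: ask the generic pNFS layer to send LAYOUTRETURN
 * before the final CLOSE/DELEGRETURN for this layout.
 *
 * Unlike NFS_LAYOUT_RETURN, this bit does not block further use of the
 * layout, so I/O can keep flowing through the pNFS path while it is set.
 */
static void ld_mark_return_before_close(struct pnfs_layout_hdr *lo)
{
	set_bit(NFS_LAYOUT_RETURN_BEFORE_CLOSE, &lo->plh_flags);
}

The generic layer consumes the bit with test_and_clear_bit() in
pnfs_roc()/pnfs_roc_drain(), and nfs4_layoutreturn_release() clears it and
wakes waiters on roc_rpcwaitq once the LAYOUTRETURN completes.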
Signed-off-by: Peng Tao
Signed-off-by: Tom Haynes
---
 fs/nfs/nfs4proc.c |  2 ++
 fs/nfs/pnfs.c     | 40 +++++++++++++++++++++++++++++++++-------
 fs/nfs/pnfs.h     |  1 +
 3 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 2397c0f..7e1a97a 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7797,6 +7797,8 @@ static void nfs4_layoutreturn_release(void *calldata)
 	if (lrp->res.lrs_present)
 		pnfs_set_layout_stateid(lo, &lrp->res.stateid, true);
 	clear_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
+	clear_bit(NFS_LAYOUT_RETURN_BEFORE_CLOSE, &lo->plh_flags);
+	rpc_wake_up(&NFS_SERVER(lo->plh_inode)->roc_rpcwaitq);
 	lo->plh_block_lgets--;
 	spin_unlock(&lo->plh_inode->i_lock);
 	pnfs_put_layout_hdr(lrp->args.layout);
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 0a0e209..d3c2ca7 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -909,6 +909,7 @@ pnfs_send_layoutreturn(struct pnfs_layout_hdr *lo, nfs4_stateid stateid,
 		status = -ENOMEM;
 		spin_lock(&ino->i_lock);
 		lo->plh_block_lgets--;
+		rpc_wake_up(&NFS_SERVER(ino)->roc_rpcwaitq);
 		spin_unlock(&ino->i_lock);
 		pnfs_put_layout_hdr(lo);
 		goto out;
@@ -926,11 +927,6 @@ pnfs_send_layoutreturn(struct pnfs_layout_hdr *lo, nfs4_stateid stateid,
 
 	status = nfs4_proc_layoutreturn(lrp, sync);
 out:
-	if (status) {
-		spin_lock(&ino->i_lock);
-		clear_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
-		spin_unlock(&ino->i_lock);
-	}
 	dprintk("<-- %s status: %d\n", __func__, status);
 	return status;
 }
@@ -1028,8 +1024,9 @@ bool pnfs_roc(struct inode *ino)
 {
 	struct pnfs_layout_hdr *lo;
 	struct pnfs_layout_segment *lseg, *tmp;
+	nfs4_stateid stateid;
 	LIST_HEAD(tmp_list);
-	bool found = false;
+	bool found = false, layoutreturn = false;
 
 	spin_lock(&ino->i_lock);
 	lo = NFS_I(ino)->layout;
@@ -1050,7 +1047,20 @@ bool pnfs_roc(struct inode *ino)
 	return true;
 
 out_nolayout:
+	if (lo) {
+		stateid = lo->plh_stateid;
+		layoutreturn =
+			test_and_clear_bit(NFS_LAYOUT_RETURN_BEFORE_CLOSE,
+					   &lo->plh_flags);
+		if (layoutreturn) {
+			lo->plh_block_lgets++;
+			pnfs_get_layout_hdr(lo);
+		}
+	}
 	spin_unlock(&ino->i_lock);
+	if (layoutreturn)
+		pnfs_send_layoutreturn(lo, stateid, IOMODE_ANY, 0,
+				       NFS4_MAX_UINT64, true);
 	return false;
 }
 
@@ -1085,8 +1095,9 @@ bool pnfs_roc_drain(struct inode *ino, u32 *barrier, struct rpc_task *task)
 	struct nfs_inode *nfsi = NFS_I(ino);
 	struct pnfs_layout_hdr *lo;
 	struct pnfs_layout_segment *lseg;
+	nfs4_stateid stateid;
 	u32 current_seqid;
-	bool found = false;
+	bool found = false, layoutreturn = false;
 
 	spin_lock(&ino->i_lock);
 	list_for_each_entry(lseg, &nfsi->layout->plh_segs, pls_list)
@@ -1103,7 +1114,22 @@ bool pnfs_roc_drain(struct inode *ino, u32 *barrier, struct rpc_task *task)
 	 */
 	*barrier = current_seqid + atomic_read(&lo->plh_outstanding);
 out:
+	if (!found) {
+		stateid = lo->plh_stateid;
+		layoutreturn =
+			test_and_clear_bit(NFS_LAYOUT_RETURN_BEFORE_CLOSE,
+					   &lo->plh_flags);
+		if (layoutreturn) {
+			lo->plh_block_lgets++;
+			pnfs_get_layout_hdr(lo);
+		}
+	}
 	spin_unlock(&ino->i_lock);
+	if (layoutreturn) {
+		rpc_sleep_on(&NFS_SERVER(ino)->roc_rpcwaitq, task, NULL);
+		pnfs_send_layoutreturn(lo, stateid, IOMODE_ANY, 0,
+				       NFS4_MAX_UINT64, false);
+	}
 	return found;
 }
 
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 1da6641..ec81767 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -96,6 +96,7 @@ enum {
 	NFS_LAYOUT_BULK_RECALL,	/* bulk recall affecting layout */
 	NFS_LAYOUT_ROC,		/* some lseg had roc bit set */
 	NFS_LAYOUT_RETURN,	/* Return this layout ASAP */
+	NFS_LAYOUT_RETURN_BEFORE_CLOSE, /* Return this layout before close */
 	NFS_LAYOUT_INVALID_STID, /* layout stateid id is invalid */
 	NFS_LAYOUT_FIRST_LAYOUTGET, /* Serialize first layoutget */
 };