From patchwork Tue Dec 16 19:01:28 2014
X-Patchwork-Submitter: Thomas Haynes
X-Patchwork-Id: 5502991
From: Tom Haynes
To: Trond Myklebust
Cc: Linux NFS Mailing List
Subject: [PATCH 25/50] nfs41: add a helper to mark layout for return
Date: Tue, 16 Dec 2014 11:01:28 -0800
Message-Id: <1418756513-95187-26-git-send-email-loghyr@primarydata.com>
In-Reply-To: <1418756513-95187-1-git-send-email-loghyr@primarydata.com>
References: <1418756513-95187-1-git-send-email-loghyr@primarydata.com>

From: Peng Tao

The new helper marks all matching layout segments as NFS_LSEG_LAYOUTRETURN,
which is an indicator for pnfs_put_lseg() to send a layoutreturn, and also
prevents pnfs_update_layout() from using the returning segments. Once the
bit is set, it is never cleared.

It also sets the proper I/O failure bit so that the pNFS path can be retried
after PNFS_LAYOUTGET_RETRY_TIMEOUT seconds.
Signed-off-by: Peng Tao
Signed-off-by: Tom Haynes
---
 fs/nfs/pnfs.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/nfs/pnfs.h |  4 ++++
 2 files changed, 59 insertions(+)

diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 1b97209..0bd149b 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -1479,6 +1479,61 @@ out_forget_reply:
 	goto out;
 }
 
+static void
+pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
+				struct list_head *tmp_list,
+				struct pnfs_layout_range *return_range)
+{
+	struct pnfs_layout_segment *lseg, *next;
+
+	dprintk("%s:Begin lo %p\n", __func__, lo);
+
+	if (list_empty(&lo->plh_segs))
+		return;
+
+	list_for_each_entry_safe(lseg, next, &lo->plh_segs, pls_list)
+		if (should_free_lseg(&lseg->pls_range, return_range)) {
+			dprintk("%s: marking lseg %p iomode %d "
+				"offset %llu length %llu\n", __func__,
+				lseg, lseg->pls_range.iomode,
+				lseg->pls_range.offset,
+				lseg->pls_range.length);
+			set_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags);
+			mark_lseg_invalid(lseg, tmp_list);
+		}
+}
+
+void pnfs_error_mark_layout_for_return(struct inode *inode,
+				       struct pnfs_layout_segment *lseg)
+{
+	struct pnfs_layout_hdr *lo = NFS_I(inode)->layout;
+	int iomode = pnfs_iomode_to_fail_bit(lseg->pls_range.iomode);
+	struct pnfs_layout_range range = {
+		.iomode = lseg->pls_range.iomode,
+		.offset = 0,
+		.length = NFS4_MAX_UINT64,
+	};
+	LIST_HEAD(free_me);
+
+	spin_lock(&inode->i_lock);
+	/* set failure bit so that pnfs path will be retried later */
+	pnfs_layout_set_fail_bit(lo, iomode);
+	set_bit(NFS_LAYOUT_RETURN, &lo->plh_flags);
+	if (lo->plh_return_iomode == 0)
+		lo->plh_return_iomode = range.iomode;
+	else if (lo->plh_return_iomode != range.iomode)
+		lo->plh_return_iomode = IOMODE_ANY;
+	/*
+	 * mark all matching lsegs so that we are sure to have no live
+	 * segments at hand when sending layoutreturn. See pnfs_put_lseg()
+	 * for how it works.
+	 */
+	pnfs_mark_matching_lsegs_return(lo, &free_me, &range);
+	spin_unlock(&inode->i_lock);
+	pnfs_free_lseg_list(&free_me);
+}
+EXPORT_SYMBOL_GPL(pnfs_error_mark_layout_for_return);
+
 void
 pnfs_generic_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
 {
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 6594429..3ce292e 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -38,6 +38,7 @@ enum {
 	NFS_LSEG_VALID = 0,	/* cleared when lseg is recalled/returned */
 	NFS_LSEG_ROC,		/* roc bit received from server */
 	NFS_LSEG_LAYOUTCOMMIT,	/* layoutcommit bit set for layoutcommit */
+	NFS_LSEG_LAYOUTRETURN,	/* layoutreturn bit set for layoutreturn */
 };
 
 /* Individual ip address */
@@ -184,6 +185,7 @@ struct pnfs_layout_hdr {
 	u32			plh_barrier; /* ignore lower seqids */
 	unsigned long		plh_retry_timestamp;
 	unsigned long		plh_flags;
+	enum pnfs_iomode	plh_return_iomode;
 	loff_t			plh_lwb; /* last write byte for layoutcommit */
 	struct rpc_cred		*plh_lc_cred; /* layoutcommit cred */
 	struct inode		*plh_inode;
@@ -274,6 +276,8 @@ void nfs4_deviceid_mark_client_invalid(struct nfs_client *clp);
 int pnfs_read_done_resend_to_mds(struct nfs_pgio_header *);
 int pnfs_write_done_resend_to_mds(struct nfs_pgio_header *);
 struct nfs4_threshold *pnfs_mdsthreshold_alloc(void);
+void pnfs_error_mark_layout_for_return(struct inode *inode,
+				       struct pnfs_layout_segment *lseg);
 
 /* nfs4_deviceid_flags */
 enum {
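
A note on intended usage: the sketch below is illustrative only and is not part
of this patch. It assumes a hypothetical layout-driver error handler,
my_layout_handle_ds_read_error(); pnfs_error_mark_layout_for_return() and
pnfs_read_done_resend_to_mds() come from the tree, and the nfs_pgio_header
fields used here (inode, lseg) already exist, but wiring them together this way
is just one plausible way a layout driver could react to a data-server error.

/*
 * Illustrative sketch, not part of this patch.  Assumes the usual fs/nfs
 * layout-driver includes (e.g. "pnfs.h"); the handler name is hypothetical.
 */
static void my_layout_handle_ds_read_error(struct nfs_pgio_header *hdr)
{
	/*
	 * Set the pNFS I/O failure bit for this iomode and mark every
	 * matching lseg with NFS_LSEG_LAYOUTRETURN, so that the final
	 * pnfs_put_lseg() sends LAYOUTRETURN and pnfs_update_layout()
	 * stops handing out the returning segments.
	 */
	pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);

	/* Then redrive the I/O through the MDS while the fail bit is set. */
	pnfs_read_done_resend_to_mds(hdr);
}

As the function name suggests, the expected callers are layout drivers reacting
to errors on the pNFS data path; the helper takes only the inode and the
failing lseg and does all the marking under the inode's i_lock.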