From patchwork Sat Aug 10 19:59:56 2024
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13759615
From: cel@kernel.org
To:
Cc: , pvorel@suse.cz, sherry.yang@oracle.com, calum.mackay@oracle.com,
    kernel-team@fb.com, ltp@lists.linux.it, Chuck Lever, Jeff Layton
Subject: [PATCH 6.1.y 05/18] NFSD: Replace nfsd_prune_bucket()
Date: Sat, 10 Aug 2024 15:59:56 -0400
Message-ID: <20240810200009.9882-6-cel@kernel.org>
In-Reply-To: <20240810200009.9882-1-cel@kernel.org>
References: <20240810200009.9882-1-cel@kernel.org>
X-Mailing-List: linux-nfs@vger.kernel.org

From: Chuck Lever

[ Upstream commit a9507f6af1450ed26a4a36d979af518f5bb21e5d ]

Enable nfsd_prune_bucket() to drop the bucket lock while calling
kfree(). Use the same pattern that Jeff recently introduced in the
NFSD filecache.

A few percpu operations are moved outside the lock since they
temporarily disable local IRQs, which is expensive and does not need
to be done while the lock is held.
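[Editor's note: the diff below collects pruned entries onto a local
dispose list under the bucket lock and only calls kfree() after the
lock is dropped. The following userspace sketch is my own illustration
of that two-phase pattern, not code from this patch or from NFSD; the
names (prune_bucket_locked, dispose_entries, struct entry) are made up,
and a pthread mutex plus a singly linked list stand in for the bucket's
cache_lock and list_head machinery.]

/*
 * Userspace analogue of "prune under the lock, free outside it":
 * entries are unlinked onto a private dispose list while the lock is
 * held, and the (potentially slow) free() calls happen only after the
 * lock has been dropped.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	int id;
	int in_progress;		/* analogous to RC_INPROG */
	struct entry *next;
};

static struct entry *bucket;		/* hypothetical cache bucket */
static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;

/* Move up to @max idle entries from the bucket onto @dispose. */
static void prune_bucket_locked(unsigned int max, struct entry **dispose)
{
	struct entry **pp = &bucket;
	unsigned int moved = 0;

	while (*pp && moved < max) {
		struct entry *e = *pp;

		if (e->in_progress) {	/* skip busy entries */
			pp = &e->next;
			continue;
		}
		*pp = e->next;		/* unlink from the bucket */
		e->next = *dispose;	/* push onto the dispose list */
		*dispose = e;
		moved++;
	}
}

/* Free everything on @dispose; no lock is held here. */
static unsigned long dispose_entries(struct entry *dispose)
{
	unsigned long freed = 0;

	while (dispose) {
		struct entry *next = dispose->next;

		free(dispose);
		dispose = next;
		freed++;
	}
	return freed;
}

int main(void)
{
	struct entry *dispose = NULL;

	/* Populate a small bucket. */
	for (int i = 0; i < 5; i++) {
		struct entry *e = calloc(1, sizeof(*e));

		e->id = i;
		e->next = bucket;
		bucket = e;
	}

	pthread_mutex_lock(&bucket_lock);
	prune_bucket_locked(3, &dispose);	/* cheap work under the lock */
	pthread_mutex_unlock(&bucket_lock);

	printf("freed %lu entries after dropping the lock\n",
	       dispose_entries(dispose));	/* expensive work outside it */
	return 0;
}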
Reviewed-by: Jeff Layton
Stable-dep-of: c135e1269f34 ("NFSD: Refactor the duplicate reply cache shrinker")
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfscache.c | 78 +++++++++++++++++++++++++++++++++++++---------
 fs/nfsd/trace.h    | 22 +++++++++++++
 2 files changed, 85 insertions(+), 15 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index 16e62e92964a..b553f2cece58 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -117,6 +117,21 @@ static void nfsd_cacherep_free(struct svc_cacherep *rp)
 	kmem_cache_free(drc_slab, rp);
 }
 
+static unsigned long
+nfsd_cacherep_dispose(struct list_head *dispose)
+{
+	struct svc_cacherep *rp;
+	unsigned long freed = 0;
+
+	while (!list_empty(dispose)) {
+		rp = list_first_entry(dispose, struct svc_cacherep, c_lru);
+		list_del(&rp->c_lru);
+		nfsd_cacherep_free(rp);
+		freed++;
+	}
+	return freed;
+}
+
 static void
 nfsd_cacherep_unlink_locked(struct nfsd_net *nn, struct nfsd_drc_bucket *b,
 			    struct svc_cacherep *rp)
@@ -260,6 +275,41 @@ nfsd_cache_bucket_find(__be32 xid, struct nfsd_net *nn)
 	return &nn->drc_hashtbl[hash];
 }
 
+/*
+ * Remove and return no more than @max expired entries in bucket @b.
+ * If @max is zero, do not limit the number of removed entries.
+ */
+static void
+nfsd_prune_bucket_locked(struct nfsd_net *nn, struct nfsd_drc_bucket *b,
+			 unsigned int max, struct list_head *dispose)
+{
+	unsigned long expiry = jiffies - RC_EXPIRE;
+	struct svc_cacherep *rp, *tmp;
+	unsigned int freed = 0;
+
+	lockdep_assert_held(&b->cache_lock);
+
+	/* The bucket LRU is ordered oldest-first. */
+	list_for_each_entry_safe(rp, tmp, &b->lru_head, c_lru) {
+		/*
+		 * Don't free entries attached to calls that are still
+		 * in-progress, but do keep scanning the list.
+		 */
+		if (rp->c_state == RC_INPROG)
+			continue;
+
+		if (atomic_read(&nn->num_drc_entries) <= nn->max_drc_entries &&
+		    time_before(expiry, rp->c_timestamp))
+			break;
+
+		nfsd_cacherep_unlink_locked(nn, b, rp);
+		list_add(&rp->c_lru, dispose);
+
+		if (max && ++freed > max)
+			break;
+	}
+}
+
 static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
 			 unsigned int max)
 {
@@ -283,11 +333,6 @@ static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
 	return freed;
 }
 
-static long nfsd_prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn)
-{
-	return prune_bucket(b, nn, 3);
-}
-
 /*
  * Walk the LRU list and prune off entries that are older than RC_EXPIRE.
  * Also prune the oldest ones when the total exceeds the max number of entries.
@@ -466,6 +511,8 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
 	__wsum csum;
 	struct nfsd_drc_bucket *b;
 	int type = rqstp->rq_cachetype;
+	unsigned long freed;
+	LIST_HEAD(dispose);
 	int rtn = RC_DOIT;
 
 	rqstp->rq_cacherep = NULL;
@@ -490,20 +537,18 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
 	found = nfsd_cache_insert(b, rp, nn);
 	if (found != rp)
 		goto found_entry;
-
-	nfsd_stats_rc_misses_inc();
 	rqstp->rq_cacherep = rp;
 	rp->c_state = RC_INPROG;
+	nfsd_prune_bucket_locked(nn, b, 3, &dispose);
+	spin_unlock(&b->cache_lock);
+	freed = nfsd_cacherep_dispose(&dispose);
+	trace_nfsd_drc_gc(nn, freed);
+
+	nfsd_stats_rc_misses_inc();
 	atomic_inc(&nn->num_drc_entries);
 	nfsd_stats_drc_mem_usage_add(nn, sizeof(*rp));
-
-	nfsd_prune_bucket(b, nn);
-
-out_unlock:
-	spin_unlock(&b->cache_lock);
-out:
-	return rtn;
+	goto out;
 
 found_entry:
 	/* We found a matching entry which is either in progress or done.
 	 */
@@ -541,7 +586,10 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
 
 out_trace:
 	trace_nfsd_drc_found(nn, rqstp, rtn);
-	goto out_unlock;
+out_unlock:
+	spin_unlock(&b->cache_lock);
+out:
+	return rtn;
 }
 
 /**
diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
index 84f26f281fe9..447b3483f94b 100644
--- a/fs/nfsd/trace.h
+++ b/fs/nfsd/trace.h
@@ -1261,6 +1261,28 @@ TRACE_EVENT(nfsd_drc_mismatch,
 		__entry->ingress)
 );
 
+TRACE_EVENT_CONDITION(nfsd_drc_gc,
+	TP_PROTO(
+		const struct nfsd_net *nn,
+		unsigned long freed
+	),
+	TP_ARGS(nn, freed),
+	TP_CONDITION(freed > 0),
+	TP_STRUCT__entry(
+		__field(unsigned long long, boot_time)
+		__field(unsigned long, freed)
+		__field(int, total)
+	),
+	TP_fast_assign(
+		__entry->boot_time = nn->boot_time;
+		__entry->freed = freed;
+		__entry->total = atomic_read(&nn->num_drc_entries);
+	),
+	TP_printk("boot_time=%16llx total=%d freed=%lu",
+		__entry->boot_time, __entry->total, __entry->freed
+	)
+);
+
 TRACE_EVENT(nfsd_cb_args,
 	TP_PROTO(
 		const struct nfs4_client *clp,