From patchwork Wed Nov 4 16:02:29 2015
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 7551651
From: Jeff Layton
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH] nfsd: remove recurring workqueue job to clean DRC
Date: Wed, 4 Nov 2015 11:02:29 -0500
Message-Id: <1446652949-16672-1-git-send-email-jeff.layton@primarydata.com>
X-Mailer: git-send-email 2.4.3

We have a shrinker, we clean out the cache when nfsd is shut down, and
we prune the hash chains on each request. A recurring workqueue job
seems like unnecessary overhead. Just remove it.
Signed-off-by: Jeff Layton
---
 fs/nfsd/nfscache.c | 26 --------------------------
 1 file changed, 26 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index 46ec934f5dee..8af64b6ffe91 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -63,7 +63,6 @@ static unsigned int longest_chain;
 static unsigned int longest_chain_cachesize;
 
 static int	nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *vec);
-static void	cache_cleaner_func(struct work_struct *unused);
 static unsigned long nfsd_reply_cache_count(struct shrinker *shrink,
 					    struct shrink_control *sc);
 static unsigned long nfsd_reply_cache_scan(struct shrinker *shrink,
@@ -76,13 +75,6 @@ static struct shrinker nfsd_reply_cache_shrinker = {
 };
 
 /*
- * locking for the reply cache:
- * A cache entry is "single use" if c_state == RC_INPROG
- * Otherwise, it when accessing _prev or _next, the lock must be held.
- */
-static DECLARE_DELAYED_WORK(cache_cleaner, cache_cleaner_func);
-
-/*
  * Put a cap on the size of the DRC based on the amount of available
  * low memory in the machine.
  *
@@ -203,7 +195,6 @@ void nfsd_reply_cache_shutdown(void)
 	unsigned int i;
 
 	unregister_shrinker(&nfsd_reply_cache_shrinker);
-	cancel_delayed_work_sync(&cache_cleaner);
 
 	for (i = 0; i < drc_hashsize; i++) {
 		struct list_head *head = &drc_hashtbl[i].lru_head;
@@ -232,7 +223,6 @@ lru_put_end(struct nfsd_drc_bucket *b, struct svc_cacherep *rp)
 {
 	rp->c_timestamp = jiffies;
 	list_move_tail(&rp->c_lru, &b->lru_head);
-	schedule_delayed_work(&cache_cleaner, RC_EXPIRE);
 }
 
 static long
@@ -266,7 +256,6 @@ prune_cache_entries(void)
 {
 	unsigned int i;
 	long freed = 0;
-	bool cancel = true;
 
 	for (i = 0; i < drc_hashsize; i++) {
 		struct nfsd_drc_bucket *b = &drc_hashtbl[i];
@@ -275,26 +264,11 @@ prune_cache_entries(void)
 			continue;
 		spin_lock(&b->cache_lock);
 		freed += prune_bucket(b);
-		if (!list_empty(&b->lru_head))
-			cancel = false;
 		spin_unlock(&b->cache_lock);
 	}
-
-	/*
-	 * Conditionally rearm the job to run in RC_EXPIRE since we just
-	 * ran the pruner.
-	 */
-	if (!cancel)
-		mod_delayed_work(system_wq, &cache_cleaner, RC_EXPIRE);
 	return freed;
 }
 
-static void
-cache_cleaner_func(struct work_struct *unused)
-{
-	prune_cache_entries();
-}
-
 static unsigned long
 nfsd_reply_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 {
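For readers skimming the diff, the mechanism being removed can be summarized in one place. This is a condensed, non-buildable sketch (kernel-only workqueue APIs; all names are taken from the hunks above, and the call sites are simplified to their essentials):

```c
/* Sketch of the removed self-rearming cleaner, reassembled from the diff. */
static void cache_cleaner_func(struct work_struct *unused);
static DECLARE_DELAYED_WORK(cache_cleaner, cache_cleaner_func);

static void
cache_cleaner_func(struct work_struct *unused)
{
	/*
	 * prune_cache_entries() walked every bucket and, if any LRU list
	 * was still non-empty, rearmed this job via
	 * mod_delayed_work(system_wq, &cache_cleaner, RC_EXPIRE).
	 */
	prune_cache_entries();
}

/* lru_put_end() kicked the timer on every cache insert/touch ... */
	schedule_delayed_work(&cache_cleaner, RC_EXPIRE);

/* ... and nfsd_reply_cache_shutdown() had to wait for it on teardown. */
	cancel_delayed_work_sync(&cache_cleaner);
```

So every request that touched the DRC rescheduled a timer whose only job was pruning that the shrinker and the per-request prune_bucket() pass already cover, which is the "unnecessary overhead" the changelog refers to.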