From patchwork Mon Sep 11 09:44:28 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13379784
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru,
    vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org,
    brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu,
    steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org,
    yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song, Alexander Viro
Subject: [PATCH v6 29/45] mbcache: dynamically allocate the mbcache shrinker
Date: Mon, 11 Sep 2023 17:44:28 +0800
Message-Id: <20230911094444.68966-30-zhengqi.arch@bytedance.com>
In-Reply-To: <20230911094444.68966-1-zhengqi.arch@bytedance.com>
References: <20230911094444.68966-1-zhengqi.arch@bytedance.com>
List-ID: linux-fsdevel.vger.kernel.org

In preparation for implementing lockless slab shrink, use the new APIs to
dynamically allocate the mbcache shrinker, so that it can be freed
asynchronously via RCU. Then we no longer need to wait for an RCU
read-side critical section when releasing the struct mb_cache.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
CC: Alexander Viro
CC: Christian Brauner
---
 fs/mbcache.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 2a4b8b549e93..82aa7a35db26 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -37,7 +37,7 @@ struct mb_cache {
 	struct list_head	c_list;
 	/* Number of entries in cache */
 	unsigned long		c_entry_count;
-	struct shrinker		c_shrink;
+	struct shrinker		*c_shrink;
 	/* Work for shrinking when the cache has too many entries */
 	struct work_struct	c_shrink_work;
 };
@@ -293,8 +293,7 @@ EXPORT_SYMBOL(mb_cache_entry_touch);
 static unsigned long mb_cache_count(struct shrinker *shrink,
 				    struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 
 	return cache->c_entry_count;
 }
@@ -333,8 +332,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 static unsigned long mb_cache_scan(struct shrinker *shrink,
 				   struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 
 	return mb_cache_shrink(cache, sc->nr_to_scan);
 }
@@ -377,15 +375,19 @@ struct mb_cache *mb_cache_create(int bucket_bits)
 	for (i = 0; i < bucket_count; i++)
 		INIT_HLIST_BL_HEAD(&cache->c_hash[i]);
 
-	cache->c_shrink.count_objects = mb_cache_count;
-	cache->c_shrink.scan_objects = mb_cache_scan;
-	cache->c_shrink.seeks = DEFAULT_SEEKS;
-	if (register_shrinker(&cache->c_shrink, "mbcache-shrinker")) {
+	cache->c_shrink = shrinker_alloc(0, "mbcache-shrinker");
+	if (!cache->c_shrink) {
 		kfree(cache->c_hash);
 		kfree(cache);
 		goto err_out;
 	}
 
+	cache->c_shrink->count_objects = mb_cache_count;
+	cache->c_shrink->scan_objects = mb_cache_scan;
+	cache->c_shrink->private_data = cache;
+
+	shrinker_register(cache->c_shrink);
+
 	INIT_WORK(&cache->c_shrink_work, mb_cache_shrink_worker);
 
 	return cache;
@@ -406,7 +408,7 @@ void mb_cache_destroy(struct mb_cache *cache)
 {
 	struct mb_cache_entry *entry, *next;
 
-	unregister_shrinker(&cache->c_shrink);
+	shrinker_free(cache->c_shrink);
 
 	/*
 	 * We don't bother with any locking. Cache must not be used at this
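
For readers following the series: the conversion above is one instance of
the generic allocate/configure/register/free lifecycle of the new shrinker
API. Below is a minimal sketch of that pattern, assuming a hypothetical
foo_cache; all foo_* names are illustrative, while shrinker_alloc(),
shrinker_register(), shrinker_free() and the shrinker fields are the APIs
this patch actually uses.

#include <linux/shrinker.h>

/*
 * Sketch only: foo_cache and the foo_* functions are hypothetical names
 * for illustration, not code from this patch.
 */
struct foo_cache {
	unsigned long nr_objects;
	struct shrinker *shrink;	/* now a heap-allocated pointer */
};

static unsigned long foo_count(struct shrinker *shrink,
			       struct shrink_control *sc)
{
	/* private_data replaces the old container_of(shrink, ...) idiom */
	struct foo_cache *cache = shrink->private_data;

	return cache->nr_objects;
}

static unsigned long foo_scan(struct shrinker *shrink,
			      struct shrink_control *sc)
{
	struct foo_cache *cache = shrink->private_data;

	/* reclaim up to sc->nr_to_scan objects here ... */
	return 0;	/* number of objects actually freed */
}

static int foo_cache_init(struct foo_cache *cache)
{
	/* allocation and registration are now two separate steps ... */
	cache->shrink = shrinker_alloc(0, "foo-cache-shrinker");
	if (!cache->shrink)
		return -ENOMEM;

	cache->shrink->count_objects = foo_count;
	cache->shrink->scan_objects = foo_scan;
	cache->shrink->private_data = cache;

	/* ... so the shrinker only becomes visible once fully set up */
	shrinker_register(cache->shrink);
	return 0;
}

static void foo_cache_exit(struct foo_cache *cache)
{
	/* unregisters, then frees asynchronously via RCU */
	shrinker_free(cache->shrink);
}

Note that in this series shrinker_alloc() already initializes ->seeks to
DEFAULT_SEEKS, which is why the explicit assignment disappears from the
mb_cache_create() hunk above.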