From patchwork Thu Aug 24 03:42:48 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13363483
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru,
    vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org,
    brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu,
    steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org,
    yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song, Alexander Viro
Subject: [PATCH v5 29/45] mbcache: dynamically allocate the mbcache shrinker
Date: Thu, 24 Aug 2023 11:42:48 +0800
Message-Id: <20230824034304.37411-30-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20230824034304.37411-1-zhengqi.arch@bytedance.com>
References: <20230824034304.37411-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: <linux-fsdevel.vger.kernel.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

In preparation for implementing lockless slab shrink, use the new APIs to
dynamically allocate the mbcache shrinker, so that it can be freed
asynchronously via RCU. Releasing the struct mb_cache then no longer needs
to wait for an RCU read-side critical section to complete.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song
CC: Alexander Viro
CC: Christian Brauner
---
 fs/mbcache.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 2a4b8b549e93..0d1e24e9a5e3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -37,7 +37,7 @@ struct mb_cache {
 	struct list_head	c_list;
 	/* Number of entries in cache */
 	unsigned long		c_entry_count;
-	struct shrinker		c_shrink;
+	struct shrinker		*c_shrink;
 	/* Work for shrinking when the cache has too many entries */
 	struct work_struct	c_shrink_work;
 };
@@ -293,8 +293,7 @@ EXPORT_SYMBOL(mb_cache_entry_touch);
 static unsigned long mb_cache_count(struct shrinker *shrink,
 				    struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 
 	return cache->c_entry_count;
 }
@@ -333,8 +332,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
 static unsigned long mb_cache_scan(struct shrinker *shrink,
 				   struct shrink_control *sc)
 {
-	struct mb_cache *cache = container_of(shrink, struct mb_cache,
-					      c_shrink);
+	struct mb_cache *cache = shrink->private_data;
 
 	return mb_cache_shrink(cache, sc->nr_to_scan);
 }
@@ -377,15 +375,20 @@ struct mb_cache *mb_cache_create(int bucket_bits)
 	for (i = 0; i < bucket_count; i++)
 		INIT_HLIST_BL_HEAD(&cache->c_hash[i]);
 
-	cache->c_shrink.count_objects = mb_cache_count;
-	cache->c_shrink.scan_objects = mb_cache_scan;
-	cache->c_shrink.seeks = DEFAULT_SEEKS;
-	if (register_shrinker(&cache->c_shrink, "mbcache-shrinker")) {
+	cache->c_shrink = shrinker_alloc(0, "mbcache-shrinker");
+	if (!cache->c_shrink) {
 		kfree(cache->c_hash);
 		kfree(cache);
 		goto err_out;
 	}
 
+	cache->c_shrink->count_objects = mb_cache_count;
+	cache->c_shrink->scan_objects = mb_cache_scan;
+	cache->c_shrink->seeks = DEFAULT_SEEKS;
+	cache->c_shrink->private_data = cache;
+
+	shrinker_register(cache->c_shrink);
+
 	INIT_WORK(&cache->c_shrink_work, mb_cache_shrink_worker);
 
 	return cache;
@@ -406,7 +409,7 @@ void mb_cache_destroy(struct mb_cache *cache)
 {
 	struct mb_cache_entry *entry, *next;
 
-	unregister_shrinker(&cache->c_shrink);
+	shrinker_free(cache->c_shrink);
 
 	/*
 	 * We don't bother with any locking. Cache must not be used at this
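
For reference, below is a minimal, self-contained sketch of the same
conversion pattern applied to a hypothetical cache. The example_cache
structure and the example_* names are invented for illustration only;
shrinker_alloc(), shrinker_register() and shrinker_free() are the APIs
introduced earlier in this series, used here exactly as in the patch above.

#include <linux/shrinker.h>
#include <linux/slab.h>

/* Hypothetical cache, used only to illustrate the API. */
struct example_cache {
	unsigned long	nr_entries;
	struct shrinker	*shrink;	/* heap-allocated, not embedded */
};

static unsigned long example_count(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	/* private_data replaces the old container_of() lookup. */
	struct example_cache *cache = shrink->private_data;

	return cache->nr_entries;
}

static unsigned long example_scan(struct shrinker *shrink,
				  struct shrink_control *sc)
{
	struct example_cache *cache = shrink->private_data;

	/* Reclaim up to sc->nr_to_scan entries from the cache here. */
	return SHRINK_STOP;	/* nothing reclaimable in this sketch */
}

static int example_cache_init(struct example_cache *cache)
{
	cache->shrink = shrinker_alloc(0, "example-shrinker");
	if (!cache->shrink)
		return -ENOMEM;

	cache->shrink->count_objects = example_count;
	cache->shrink->scan_objects = example_scan;
	cache->shrink->seeks = DEFAULT_SEEKS;
	cache->shrink->private_data = cache;

	/* Publish the shrinker only after it is fully initialized. */
	shrinker_register(cache->shrink);
	return 0;
}

static void example_cache_exit(struct example_cache *cache)
{
	/* The shrinker itself is freed asynchronously via RCU, so the
	 * caller does not wait for readers before freeing the cache. */
	shrinker_free(cache->shrink);
}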