From patchwork Wed Aug 16 08:34:15 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13354732
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song
Subject: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
Date: Wed, 16 Aug 2023 16:34:15 +0800
Message-Id: <20230816083419.41088-2-zhengqi.arch@bytedance.com>
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
The following functions are only used inside the mm subsystem, so it's
better to move their declarations to the mm/internal.h file.

1. shrinker_debugfs_add()
2. shrinker_debugfs_detach()
3. shrinker_debugfs_remove()

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 include/linux/shrinker.h | 19 -------------------
 mm/internal.h            | 26 ++++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 224293b2dd06..8dc15aa37410 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -106,28 +106,9 @@ extern void free_prealloced_shrinker(struct shrinker *shrinker);
 extern void synchronize_shrinkers(void);
 
 #ifdef CONFIG_SHRINKER_DEBUG
-extern int shrinker_debugfs_add(struct shrinker *shrinker);
-extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
-					      int *debugfs_id);
-extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
-				    int debugfs_id);
 extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
 						  const char *fmt, ...);
 #else /* CONFIG_SHRINKER_DEBUG */
-static inline int shrinker_debugfs_add(struct shrinker *shrinker)
-{
-	return 0;
-}
-static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
-						     int *debugfs_id)
-{
-	*debugfs_id = -1;
-	return NULL;
-}
-static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
-					   int debugfs_id)
-{
-}
 static inline __printf(2, 3)
 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 0b0029e4db87..dc9c81ff1b27 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1153,4 +1153,30 @@ struct vma_prepare {
 	struct vm_area_struct *remove;
 	struct vm_area_struct *remove2;
 };
+
+/* shrinker related functions */
+
+#ifdef CONFIG_SHRINKER_DEBUG
+extern int shrinker_debugfs_add(struct shrinker *shrinker);
+extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
+					      int *debugfs_id);
+extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
+				    int debugfs_id);
+#else /* CONFIG_SHRINKER_DEBUG */
+static inline int shrinker_debugfs_add(struct shrinker *shrinker)
+{
+	return 0;
+}
+static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
+						     int *debugfs_id)
+{
+	*debugfs_id = -1;
+	return NULL;
+}
+static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
+					   int debugfs_id)
+{
+}
+#endif /* CONFIG_SHRINKER_DEBUG */
+
 #endif /* __MM_INTERNAL_H */
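For readers unfamiliar with the stub pattern this patch carries over into
mm/internal.h: the !CONFIG_SHRINKER_DEBUG branch keeps static inline no-op
stubs next to the real declarations so that callers never need an #ifdef.
A minimal standalone sketch of that pattern follows; it is userspace C
rather than kernel code, and every name in it is invented for illustration.

#include <stdio.h>

/* Toggle this to simulate the Kconfig option: */
/* #define CONFIG_FOO_DEBUG 1 */

#ifdef CONFIG_FOO_DEBUG
/* "Real" implementation, only built when the option is enabled. */
int foo_debugfs_add(int id)
{
	printf("registering debugfs entry for %d\n", id);
	return 0;
}
#else /* !CONFIG_FOO_DEBUG */
/* No-op stub: same signature, so call sites compile unconditionally. */
static inline int foo_debugfs_add(int id)
{
	return 0;
}
#endif

int main(void)
{
	/* Builds and runs with or without the option defined. */
	return foo_debugfs_add(42);
}

Either way the call site compiles unchanged; with the option off, the
inline stub is folded away and no symbol is emitted.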
From patchwork Wed Aug 16 08:34:16 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13354733
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song
Subject: [PATCH 2/5] mm: vmscan: move shrinker-related code into a separate file
Date: Wed, 16 Aug 2023 16:34:16 +0800
Message-Id: <20230816083419.41088-3-zhengqi.arch@bytedance.com>
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
The mm/vmscan.c file is too large, so move the shrinker-related code out
of it and into a separate file, mm/shrinker.c. No functional changes.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 mm/Makefile   |   4 +-
 mm/internal.h |   2 +
 mm/shrinker.c | 709 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c   | 701 -------------------------------------------------
 4 files changed, 713 insertions(+), 703 deletions(-)
 create mode 100644 mm/shrinker.c

diff --git a/mm/Makefile b/mm/Makefile
index ec65984e2ade..33873c8aedb3 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,8 +48,8 @@ endif
 
 obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   maccess.o page-writeback.o folio-compat.o \
-			   readahead.o swap.o truncate.o vmscan.o shmem.o \
-			   util.o mmzone.o vmstat.o backing-dev.o \
+			   readahead.o swap.o truncate.o vmscan.o shrinker.o \
+			   shmem.o util.o mmzone.o vmstat.o backing-dev.o \
 			   mm_init.o percpu.o slab_common.o \
 			   compaction.o show_mem.o shmem_quota.o\
 			   interval_tree.o list_lru.o workingset.o \
diff --git a/mm/internal.h b/mm/internal.h
index dc9c81ff1b27..5907eced8548 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1155,6 +1155,8 @@ struct vma_prepare {
 };
 
 /* shrinker related functions */
+unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
+			  int priority);
 
 #ifdef CONFIG_SHRINKER_DEBUG
 extern int shrinker_debugfs_add(struct shrinker *shrinker);
diff --git a/mm/shrinker.c b/mm/shrinker.c
new file mode 100644
index 000000000000..043c87ccfab4
--- /dev/null
+++ b/mm/shrinker.c
@@ -0,0 +1,709 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/memcontrol.h>
+#include <linux/rwsem.h>
+#include <linux/shrinker.h>
+#include <trace/events/vmscan.h>
+
+#include "internal.h"
+
+LIST_HEAD(shrinker_list);
+DECLARE_RWSEM(shrinker_rwsem);
+
+#ifdef CONFIG_MEMCG
+static int shrinker_nr_max;
+
+/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
+static inline int shrinker_map_size(int nr_items)
+{
+	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+}
+
+static inline int shrinker_defer_size(int nr_items)
+{
+
return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t)); +} + +void free_shrinker_info(struct mem_cgroup *memcg) +{ + struct mem_cgroup_per_node *pn; + struct shrinker_info *info; + int nid; + + for_each_node(nid) { + pn = memcg->nodeinfo[nid]; + info = rcu_dereference_protected(pn->shrinker_info, true); + kvfree(info); + rcu_assign_pointer(pn->shrinker_info, NULL); + } +} + +int alloc_shrinker_info(struct mem_cgroup *memcg) +{ + struct shrinker_info *info; + int nid, size, ret = 0; + int map_size, defer_size = 0; + + down_write(&shrinker_rwsem); + map_size = shrinker_map_size(shrinker_nr_max); + defer_size = shrinker_defer_size(shrinker_nr_max); + size = map_size + defer_size; + for_each_node(nid) { + info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid); + if (!info) { + free_shrinker_info(memcg); + ret = -ENOMEM; + break; + } + info->nr_deferred = (atomic_long_t *)(info + 1); + info->map = (void *)info->nr_deferred + defer_size; + info->map_nr_max = shrinker_nr_max; + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); + } + up_write(&shrinker_rwsem); + + return ret; +} + +static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg, + int nid) +{ + return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + lockdep_is_held(&shrinker_rwsem)); +} + +static int expand_one_shrinker_info(struct mem_cgroup *memcg, + int map_size, int defer_size, + int old_map_size, int old_defer_size, + int new_nr_max) +{ + struct shrinker_info *new, *old; + struct mem_cgroup_per_node *pn; + int nid; + int size = map_size + defer_size; + + for_each_node(nid) { + pn = memcg->nodeinfo[nid]; + old = shrinker_info_protected(memcg, nid); + /* Not yet online memcg */ + if (!old) + return 0; + + /* Already expanded this shrinker_info */ + if (new_nr_max <= old->map_nr_max) + continue; + + new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid); + if (!new) + return -ENOMEM; + + new->nr_deferred = (atomic_long_t *)(new + 1); + new->map = (void *)new->nr_deferred + defer_size; + new->map_nr_max = new_nr_max; + + /* map: set all old bits, clear all new bits */ + memset(new->map, (int)0xff, old_map_size); + memset((void *)new->map + old_map_size, 0, map_size - old_map_size); + /* nr_deferred: copy old values, clear all new values */ + memcpy(new->nr_deferred, old->nr_deferred, old_defer_size); + memset((void *)new->nr_deferred + old_defer_size, 0, + defer_size - old_defer_size); + + rcu_assign_pointer(pn->shrinker_info, new); + kvfree_rcu(old, rcu); + } + + return 0; +} + +static int expand_shrinker_info(int new_id) +{ + int ret = 0; + int new_nr_max = round_up(new_id + 1, BITS_PER_LONG); + int map_size, defer_size = 0; + int old_map_size, old_defer_size = 0; + struct mem_cgroup *memcg; + + if (!root_mem_cgroup) + goto out; + + lockdep_assert_held(&shrinker_rwsem); + + map_size = shrinker_map_size(new_nr_max); + defer_size = shrinker_defer_size(new_nr_max); + old_map_size = shrinker_map_size(shrinker_nr_max); + old_defer_size = shrinker_defer_size(shrinker_nr_max); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + ret = expand_one_shrinker_info(memcg, map_size, defer_size, + old_map_size, old_defer_size, + new_nr_max); + if (ret) { + mem_cgroup_iter_break(NULL, memcg); + goto out; + } + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); +out: + if (!ret) + shrinker_nr_max = new_nr_max; + + return ret; +} + +void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) +{ + if (shrinker_id >= 0 && memcg && 
!mem_cgroup_is_root(memcg)) { + struct shrinker_info *info; + + rcu_read_lock(); + info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info); + if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) { + /* Pairs with smp mb in shrink_slab() */ + smp_mb__before_atomic(); + set_bit(shrinker_id, info->map); + } + rcu_read_unlock(); + } +} + +static DEFINE_IDR(shrinker_idr); + +static int prealloc_memcg_shrinker(struct shrinker *shrinker) +{ + int id, ret = -ENOMEM; + + if (mem_cgroup_disabled()) + return -ENOSYS; + + down_write(&shrinker_rwsem); + /* This may call shrinker, so it must use down_read_trylock() */ + id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL); + if (id < 0) + goto unlock; + + if (id >= shrinker_nr_max) { + if (expand_shrinker_info(id)) { + idr_remove(&shrinker_idr, id); + goto unlock; + } + } + shrinker->id = id; + ret = 0; +unlock: + up_write(&shrinker_rwsem); + return ret; +} + +static void unregister_memcg_shrinker(struct shrinker *shrinker) +{ + int id = shrinker->id; + + BUG_ON(id < 0); + + lockdep_assert_held(&shrinker_rwsem); + + idr_remove(&shrinker_idr, id); +} + +static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct shrinker_info *info; + + info = shrinker_info_protected(memcg, nid); + return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0); +} + +static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct shrinker_info *info; + + info = shrinker_info_protected(memcg, nid); + return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); +} + +void reparent_shrinker_deferred(struct mem_cgroup *memcg) +{ + int i, nid; + long nr; + struct mem_cgroup *parent; + struct shrinker_info *child_info, *parent_info; + + parent = parent_mem_cgroup(memcg); + if (!parent) + parent = root_mem_cgroup; + + /* Prevent from concurrent shrinker_info expand */ + down_read(&shrinker_rwsem); + for_each_node(nid) { + child_info = shrinker_info_protected(memcg, nid); + parent_info = shrinker_info_protected(parent, nid); + for (i = 0; i < child_info->map_nr_max; i++) { + nr = atomic_long_read(&child_info->nr_deferred[i]); + atomic_long_add(nr, &parent_info->nr_deferred[i]); + } + } + up_read(&shrinker_rwsem); +} +#else +static int prealloc_memcg_shrinker(struct shrinker *shrinker) +{ + return -ENOSYS; +} + +static void unregister_memcg_shrinker(struct shrinker *shrinker) +{ +} + +static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} + +static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} +#endif /* CONFIG_MEMCG */ + +static long xchg_nr_deferred(struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return xchg_nr_deferred_memcg(nid, shrinker, + sc->memcg); + + return atomic_long_xchg(&shrinker->nr_deferred[nid], 0); +} + + +static long add_nr_deferred(long nr, struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return add_nr_deferred_memcg(nr, nid, shrinker, + sc->memcg); + + return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]); +} + +#define SHRINK_BATCH 128 + +static unsigned long do_shrink_slab(struct 
shrink_control *shrinkctl, + struct shrinker *shrinker, int priority) +{ + unsigned long freed = 0; + unsigned long long delta; + long total_scan; + long freeable; + long nr; + long new_nr; + long batch_size = shrinker->batch ? shrinker->batch + : SHRINK_BATCH; + long scanned = 0, next_deferred; + + freeable = shrinker->count_objects(shrinker, shrinkctl); + if (freeable == 0 || freeable == SHRINK_EMPTY) + return freeable; + + /* + * copy the current shrinker scan count into a local variable + * and zero it so that other concurrent shrinker invocations + * don't also do this scanning work. + */ + nr = xchg_nr_deferred(shrinker, shrinkctl); + + if (shrinker->seeks) { + delta = freeable >> priority; + delta *= 4; + do_div(delta, shrinker->seeks); + } else { + /* + * These objects don't require any IO to create. Trim + * them aggressively under memory pressure to keep + * them from causing refetches in the IO caches. + */ + delta = freeable / 2; + } + + total_scan = nr >> priority; + total_scan += delta; + total_scan = min(total_scan, (2 * freeable)); + + trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, + freeable, delta, total_scan, priority); + + /* + * Normally, we should not scan less than batch_size objects in one + * pass to avoid too frequent shrinker calls, but if the slab has less + * than batch_size objects in total and we are really tight on memory, + * we will try to reclaim all available objects, otherwise we can end + * up failing allocations although there are plenty of reclaimable + * objects spread over several slabs with usage less than the + * batch_size. + * + * We detect the "tight on memory" situations by looking at the total + * number of objects we want to scan (total_scan). If it is greater + * than the total number of objects on slab (freeable), we must be + * scanning at high prio and therefore should try to reclaim as much as + * possible. + */ + while (total_scan >= batch_size || + total_scan >= freeable) { + unsigned long ret; + unsigned long nr_to_scan = min(batch_size, total_scan); + + shrinkctl->nr_to_scan = nr_to_scan; + shrinkctl->nr_scanned = nr_to_scan; + ret = shrinker->scan_objects(shrinker, shrinkctl); + if (ret == SHRINK_STOP) + break; + freed += ret; + + count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned); + total_scan -= shrinkctl->nr_scanned; + scanned += shrinkctl->nr_scanned; + + cond_resched(); + } + + /* + * The deferred work is increased by any new work (delta) that wasn't + * done, decreased by old deferred work that was done now. + * + * And it is capped to two times of the freeable items. + */ + next_deferred = max_t(long, (nr + delta - scanned), 0); + next_deferred = min(next_deferred, (2 * freeable)); + + /* + * move the unused scan count back into the shrinker in a + * manner that handles concurrent updates. 
+ */ + new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl); + + trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan); + return freed; +} + +#ifdef CONFIG_MEMCG +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, + struct mem_cgroup *memcg, int priority) +{ + struct shrinker_info *info; + unsigned long ret, freed = 0; + int i; + + if (!mem_cgroup_online(memcg)) + return 0; + + if (!down_read_trylock(&shrinker_rwsem)) + return 0; + + info = shrinker_info_protected(memcg, nid); + if (unlikely(!info)) + goto unlock; + + for_each_set_bit(i, info->map, info->map_nr_max) { + struct shrink_control sc = { + .gfp_mask = gfp_mask, + .nid = nid, + .memcg = memcg, + }; + struct shrinker *shrinker; + + shrinker = idr_find(&shrinker_idr, i); + if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) { + if (!shrinker) + clear_bit(i, info->map); + continue; + } + + /* Call non-slab shrinkers even though kmem is disabled */ + if (!memcg_kmem_online() && + !(shrinker->flags & SHRINKER_NONSLAB)) + continue; + + ret = do_shrink_slab(&sc, shrinker, priority); + if (ret == SHRINK_EMPTY) { + clear_bit(i, info->map); + /* + * After the shrinker reported that it had no objects to + * free, but before we cleared the corresponding bit in + * the memcg shrinker map, a new object might have been + * added. To make sure, we have the bit set in this + * case, we invoke the shrinker one more time and reset + * the bit if it reports that it is not empty anymore. + * The memory barrier here pairs with the barrier in + * set_shrinker_bit(): + * + * list_lru_add() shrink_slab_memcg() + * list_add_tail() clear_bit() + * + * set_bit() do_shrink_slab() + */ + smp_mb__after_atomic(); + ret = do_shrink_slab(&sc, shrinker, priority); + if (ret == SHRINK_EMPTY) + ret = 0; + else + set_shrinker_bit(memcg, nid, i); + } + freed += ret; + + if (rwsem_is_contended(&shrinker_rwsem)) { + freed = freed ? : 1; + break; + } + } +unlock: + up_read(&shrinker_rwsem); + return freed; +} +#else /* !CONFIG_MEMCG */ +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, + struct mem_cgroup *memcg, int priority) +{ + return 0; +} +#endif /* CONFIG_MEMCG */ + +/** + * shrink_slab - shrink slab caches + * @gfp_mask: allocation context + * @nid: node whose slab caches to target + * @memcg: memory cgroup whose slab caches to target + * @priority: the reclaim priority + * + * Call the shrink functions to age shrinkable caches. + * + * @nid is passed along to shrinkers with SHRINKER_NUMA_AWARE set, + * unaware shrinkers will receive a node id of 0 instead. + * + * @memcg specifies the memory cgroup to target. Unaware shrinkers + * are called only if it is the root cgroup. + * + * @priority is sc->priority, we take the number of objects and >> by priority + * in order to get the scan target. + * + * Returns the number of reclaimed slab objects. + */ +unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg, + int priority) +{ + unsigned long ret, freed = 0; + struct shrinker *shrinker; + + /* + * The root memcg might be allocated even though memcg is disabled + * via "cgroup_disable=memory" boot parameter. This could make + * mem_cgroup_is_root() return false, then just run memcg slab + * shrink, but skip global shrink. This may result in premature + * oom. 
+ */ + if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg)) + return shrink_slab_memcg(gfp_mask, nid, memcg, priority); + + if (!down_read_trylock(&shrinker_rwsem)) + goto out; + + list_for_each_entry(shrinker, &shrinker_list, list) { + struct shrink_control sc = { + .gfp_mask = gfp_mask, + .nid = nid, + .memcg = memcg, + }; + + ret = do_shrink_slab(&sc, shrinker, priority); + if (ret == SHRINK_EMPTY) + ret = 0; + freed += ret; + /* + * Bail out if someone want to register a new shrinker to + * prevent the registration from being stalled for long periods + * by parallel ongoing shrinking. + */ + if (rwsem_is_contended(&shrinker_rwsem)) { + freed = freed ? : 1; + break; + } + } + + up_read(&shrinker_rwsem); +out: + cond_resched(); + return freed; +} + +/* + * Add a shrinker callback to be called from the vm. + */ +static int __prealloc_shrinker(struct shrinker *shrinker) +{ + unsigned int size; + int err; + + if (shrinker->flags & SHRINKER_MEMCG_AWARE) { + err = prealloc_memcg_shrinker(shrinker); + if (err != -ENOSYS) + return err; + + shrinker->flags &= ~SHRINKER_MEMCG_AWARE; + } + + size = sizeof(*shrinker->nr_deferred); + if (shrinker->flags & SHRINKER_NUMA_AWARE) + size *= nr_node_ids; + + shrinker->nr_deferred = kzalloc(size, GFP_KERNEL); + if (!shrinker->nr_deferred) + return -ENOMEM; + + return 0; +} + +#ifdef CONFIG_SHRINKER_DEBUG +int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...) +{ + va_list ap; + int err; + + va_start(ap, fmt); + shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap); + va_end(ap); + if (!shrinker->name) + return -ENOMEM; + + err = __prealloc_shrinker(shrinker); + if (err) { + kfree_const(shrinker->name); + shrinker->name = NULL; + } + + return err; +} +#else +int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...) +{ + return __prealloc_shrinker(shrinker); +} +#endif + +void free_prealloced_shrinker(struct shrinker *shrinker) +{ +#ifdef CONFIG_SHRINKER_DEBUG + kfree_const(shrinker->name); + shrinker->name = NULL; +#endif + if (shrinker->flags & SHRINKER_MEMCG_AWARE) { + down_write(&shrinker_rwsem); + unregister_memcg_shrinker(shrinker); + up_write(&shrinker_rwsem); + return; + } + + kfree(shrinker->nr_deferred); + shrinker->nr_deferred = NULL; +} + +void register_shrinker_prepared(struct shrinker *shrinker) +{ + down_write(&shrinker_rwsem); + list_add_tail(&shrinker->list, &shrinker_list); + shrinker->flags |= SHRINKER_REGISTERED; + shrinker_debugfs_add(shrinker); + up_write(&shrinker_rwsem); +} + +static int __register_shrinker(struct shrinker *shrinker) +{ + int err = __prealloc_shrinker(shrinker); + + if (err) + return err; + register_shrinker_prepared(shrinker); + return 0; +} + +#ifdef CONFIG_SHRINKER_DEBUG +int register_shrinker(struct shrinker *shrinker, const char *fmt, ...) +{ + va_list ap; + int err; + + va_start(ap, fmt); + shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap); + va_end(ap); + if (!shrinker->name) + return -ENOMEM; + + err = __register_shrinker(shrinker); + if (err) { + kfree_const(shrinker->name); + shrinker->name = NULL; + } + return err; +} +#else +int register_shrinker(struct shrinker *shrinker, const char *fmt, ...) 
+{ + return __register_shrinker(shrinker); +} +#endif +EXPORT_SYMBOL(register_shrinker); + +/* + * Remove one + */ +void unregister_shrinker(struct shrinker *shrinker) +{ + struct dentry *debugfs_entry; + int debugfs_id; + + if (!(shrinker->flags & SHRINKER_REGISTERED)) + return; + + down_write(&shrinker_rwsem); + list_del(&shrinker->list); + shrinker->flags &= ~SHRINKER_REGISTERED; + if (shrinker->flags & SHRINKER_MEMCG_AWARE) + unregister_memcg_shrinker(shrinker); + debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id); + up_write(&shrinker_rwsem); + + shrinker_debugfs_remove(debugfs_entry, debugfs_id); + + kfree(shrinker->nr_deferred); + shrinker->nr_deferred = NULL; +} +EXPORT_SYMBOL(unregister_shrinker); + +/** + * synchronize_shrinkers - Wait for all running shrinkers to complete. + * + * This is equivalent to calling unregister_shrink() and register_shrinker(), + * but atomically and with less overhead. This is useful to guarantee that all + * shrinker invocations have seen an update, before freeing memory, similar to + * rcu. + */ +void synchronize_shrinkers(void) +{ + down_write(&shrinker_rwsem); + up_write(&shrinker_rwsem); +} +EXPORT_SYMBOL(synchronize_shrinkers); diff --git a/mm/vmscan.c b/mm/vmscan.c index c7c149cb8d66..f5df4f1bf620 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -35,7 +35,6 @@ #include #include #include -#include #include #include #include @@ -188,246 +187,7 @@ struct scan_control { */ int vm_swappiness = 60; -LIST_HEAD(shrinker_list); -DECLARE_RWSEM(shrinker_rwsem); - #ifdef CONFIG_MEMCG -static int shrinker_nr_max; - -/* The shrinker_info is expanded in a batch of BITS_PER_LONG */ -static inline int shrinker_map_size(int nr_items) -{ - return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); -} - -static inline int shrinker_defer_size(int nr_items) -{ - return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t)); -} - -static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg, - int nid) -{ - return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, - lockdep_is_held(&shrinker_rwsem)); -} - -static int expand_one_shrinker_info(struct mem_cgroup *memcg, - int map_size, int defer_size, - int old_map_size, int old_defer_size, - int new_nr_max) -{ - struct shrinker_info *new, *old; - struct mem_cgroup_per_node *pn; - int nid; - int size = map_size + defer_size; - - for_each_node(nid) { - pn = memcg->nodeinfo[nid]; - old = shrinker_info_protected(memcg, nid); - /* Not yet online memcg */ - if (!old) - return 0; - - /* Already expanded this shrinker_info */ - if (new_nr_max <= old->map_nr_max) - continue; - - new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid); - if (!new) - return -ENOMEM; - - new->nr_deferred = (atomic_long_t *)(new + 1); - new->map = (void *)new->nr_deferred + defer_size; - new->map_nr_max = new_nr_max; - - /* map: set all old bits, clear all new bits */ - memset(new->map, (int)0xff, old_map_size); - memset((void *)new->map + old_map_size, 0, map_size - old_map_size); - /* nr_deferred: copy old values, clear all new values */ - memcpy(new->nr_deferred, old->nr_deferred, old_defer_size); - memset((void *)new->nr_deferred + old_defer_size, 0, - defer_size - old_defer_size); - - rcu_assign_pointer(pn->shrinker_info, new); - kvfree_rcu(old, rcu); - } - - return 0; -} - -void free_shrinker_info(struct mem_cgroup *memcg) -{ - struct mem_cgroup_per_node *pn; - struct shrinker_info *info; - int nid; - - for_each_node(nid) { - pn = memcg->nodeinfo[nid]; - info = 
rcu_dereference_protected(pn->shrinker_info, true); - kvfree(info); - rcu_assign_pointer(pn->shrinker_info, NULL); - } -} - -int alloc_shrinker_info(struct mem_cgroup *memcg) -{ - struct shrinker_info *info; - int nid, size, ret = 0; - int map_size, defer_size = 0; - - down_write(&shrinker_rwsem); - map_size = shrinker_map_size(shrinker_nr_max); - defer_size = shrinker_defer_size(shrinker_nr_max); - size = map_size + defer_size; - for_each_node(nid) { - info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid); - if (!info) { - free_shrinker_info(memcg); - ret = -ENOMEM; - break; - } - info->nr_deferred = (atomic_long_t *)(info + 1); - info->map = (void *)info->nr_deferred + defer_size; - info->map_nr_max = shrinker_nr_max; - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); - } - up_write(&shrinker_rwsem); - - return ret; -} - -static int expand_shrinker_info(int new_id) -{ - int ret = 0; - int new_nr_max = round_up(new_id + 1, BITS_PER_LONG); - int map_size, defer_size = 0; - int old_map_size, old_defer_size = 0; - struct mem_cgroup *memcg; - - if (!root_mem_cgroup) - goto out; - - lockdep_assert_held(&shrinker_rwsem); - - map_size = shrinker_map_size(new_nr_max); - defer_size = shrinker_defer_size(new_nr_max); - old_map_size = shrinker_map_size(shrinker_nr_max); - old_defer_size = shrinker_defer_size(shrinker_nr_max); - - memcg = mem_cgroup_iter(NULL, NULL, NULL); - do { - ret = expand_one_shrinker_info(memcg, map_size, defer_size, - old_map_size, old_defer_size, - new_nr_max); - if (ret) { - mem_cgroup_iter_break(NULL, memcg); - goto out; - } - } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); -out: - if (!ret) - shrinker_nr_max = new_nr_max; - - return ret; -} - -void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) -{ - if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { - struct shrinker_info *info; - - rcu_read_lock(); - info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info); - if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) { - /* Pairs with smp mb in shrink_slab() */ - smp_mb__before_atomic(); - set_bit(shrinker_id, info->map); - } - rcu_read_unlock(); - } -} - -static DEFINE_IDR(shrinker_idr); - -static int prealloc_memcg_shrinker(struct shrinker *shrinker) -{ - int id, ret = -ENOMEM; - - if (mem_cgroup_disabled()) - return -ENOSYS; - - down_write(&shrinker_rwsem); - /* This may call shrinker, so it must use down_read_trylock() */ - id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL); - if (id < 0) - goto unlock; - - if (id >= shrinker_nr_max) { - if (expand_shrinker_info(id)) { - idr_remove(&shrinker_idr, id); - goto unlock; - } - } - shrinker->id = id; - ret = 0; -unlock: - up_write(&shrinker_rwsem); - return ret; -} - -static void unregister_memcg_shrinker(struct shrinker *shrinker) -{ - int id = shrinker->id; - - BUG_ON(id < 0); - - lockdep_assert_held(&shrinker_rwsem); - - idr_remove(&shrinker_idr, id); -} - -static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, - struct mem_cgroup *memcg) -{ - struct shrinker_info *info; - - info = shrinker_info_protected(memcg, nid); - return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0); -} - -static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, - struct mem_cgroup *memcg) -{ - struct shrinker_info *info; - - info = shrinker_info_protected(memcg, nid); - return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); -} - -void reparent_shrinker_deferred(struct mem_cgroup *memcg) -{ - int i, nid; - long nr; - 
struct mem_cgroup *parent; - struct shrinker_info *child_info, *parent_info; - - parent = parent_mem_cgroup(memcg); - if (!parent) - parent = root_mem_cgroup; - - /* Prevent from concurrent shrinker_info expand */ - down_read(&shrinker_rwsem); - for_each_node(nid) { - child_info = shrinker_info_protected(memcg, nid); - parent_info = shrinker_info_protected(parent, nid); - for (i = 0; i < child_info->map_nr_max; i++) { - nr = atomic_long_read(&child_info->nr_deferred[i]); - atomic_long_add(nr, &parent_info->nr_deferred[i]); - } - } - up_read(&shrinker_rwsem); -} /* Returns true for reclaim through cgroup limits or cgroup interfaces. */ static bool cgroup_reclaim(struct scan_control *sc) @@ -468,27 +228,6 @@ static bool writeback_throttling_sane(struct scan_control *sc) return false; } #else -static int prealloc_memcg_shrinker(struct shrinker *shrinker) -{ - return -ENOSYS; -} - -static void unregister_memcg_shrinker(struct shrinker *shrinker) -{ -} - -static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, - struct mem_cgroup *memcg) -{ - return 0; -} - -static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, - struct mem_cgroup *memcg) -{ - return 0; -} - static bool cgroup_reclaim(struct scan_control *sc) { return false; @@ -557,39 +296,6 @@ static void flush_reclaim_state(struct scan_control *sc) } } -static long xchg_nr_deferred(struct shrinker *shrinker, - struct shrink_control *sc) -{ - int nid = sc->nid; - - if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) - nid = 0; - - if (sc->memcg && - (shrinker->flags & SHRINKER_MEMCG_AWARE)) - return xchg_nr_deferred_memcg(nid, shrinker, - sc->memcg); - - return atomic_long_xchg(&shrinker->nr_deferred[nid], 0); -} - - -static long add_nr_deferred(long nr, struct shrinker *shrinker, - struct shrink_control *sc) -{ - int nid = sc->nid; - - if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) - nid = 0; - - if (sc->memcg && - (shrinker->flags & SHRINKER_MEMCG_AWARE)) - return add_nr_deferred_memcg(nr, nid, shrinker, - sc->memcg); - - return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]); -} - static bool can_demote(int nid, struct scan_control *sc) { if (!numa_demotion_enabled) @@ -671,413 +377,6 @@ static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, return size; } -/* - * Add a shrinker callback to be called from the vm. - */ -static int __prealloc_shrinker(struct shrinker *shrinker) -{ - unsigned int size; - int err; - - if (shrinker->flags & SHRINKER_MEMCG_AWARE) { - err = prealloc_memcg_shrinker(shrinker); - if (err != -ENOSYS) - return err; - - shrinker->flags &= ~SHRINKER_MEMCG_AWARE; - } - - size = sizeof(*shrinker->nr_deferred); - if (shrinker->flags & SHRINKER_NUMA_AWARE) - size *= nr_node_ids; - - shrinker->nr_deferred = kzalloc(size, GFP_KERNEL); - if (!shrinker->nr_deferred) - return -ENOMEM; - - return 0; -} - -#ifdef CONFIG_SHRINKER_DEBUG -int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...) -{ - va_list ap; - int err; - - va_start(ap, fmt); - shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap); - va_end(ap); - if (!shrinker->name) - return -ENOMEM; - - err = __prealloc_shrinker(shrinker); - if (err) { - kfree_const(shrinker->name); - shrinker->name = NULL; - } - - return err; -} -#else -int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...) 
-{ - return __prealloc_shrinker(shrinker); -} -#endif - -void free_prealloced_shrinker(struct shrinker *shrinker) -{ -#ifdef CONFIG_SHRINKER_DEBUG - kfree_const(shrinker->name); - shrinker->name = NULL; -#endif - if (shrinker->flags & SHRINKER_MEMCG_AWARE) { - down_write(&shrinker_rwsem); - unregister_memcg_shrinker(shrinker); - up_write(&shrinker_rwsem); - return; - } - - kfree(shrinker->nr_deferred); - shrinker->nr_deferred = NULL; -} - -void register_shrinker_prepared(struct shrinker *shrinker) -{ - down_write(&shrinker_rwsem); - list_add_tail(&shrinker->list, &shrinker_list); - shrinker->flags |= SHRINKER_REGISTERED; - shrinker_debugfs_add(shrinker); - up_write(&shrinker_rwsem); -} - -static int __register_shrinker(struct shrinker *shrinker) -{ - int err = __prealloc_shrinker(shrinker); - - if (err) - return err; - register_shrinker_prepared(shrinker); - return 0; -} - -#ifdef CONFIG_SHRINKER_DEBUG -int register_shrinker(struct shrinker *shrinker, const char *fmt, ...) -{ - va_list ap; - int err; - - va_start(ap, fmt); - shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap); - va_end(ap); - if (!shrinker->name) - return -ENOMEM; - - err = __register_shrinker(shrinker); - if (err) { - kfree_const(shrinker->name); - shrinker->name = NULL; - } - return err; -} -#else -int register_shrinker(struct shrinker *shrinker, const char *fmt, ...) -{ - return __register_shrinker(shrinker); -} -#endif -EXPORT_SYMBOL(register_shrinker); - -/* - * Remove one - */ -void unregister_shrinker(struct shrinker *shrinker) -{ - struct dentry *debugfs_entry; - int debugfs_id; - - if (!(shrinker->flags & SHRINKER_REGISTERED)) - return; - - down_write(&shrinker_rwsem); - list_del(&shrinker->list); - shrinker->flags &= ~SHRINKER_REGISTERED; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) - unregister_memcg_shrinker(shrinker); - debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id); - up_write(&shrinker_rwsem); - - shrinker_debugfs_remove(debugfs_entry, debugfs_id); - - kfree(shrinker->nr_deferred); - shrinker->nr_deferred = NULL; -} -EXPORT_SYMBOL(unregister_shrinker); - -/** - * synchronize_shrinkers - Wait for all running shrinkers to complete. - * - * This is equivalent to calling unregister_shrink() and register_shrinker(), - * but atomically and with less overhead. This is useful to guarantee that all - * shrinker invocations have seen an update, before freeing memory, similar to - * rcu. - */ -void synchronize_shrinkers(void) -{ - down_write(&shrinker_rwsem); - up_write(&shrinker_rwsem); -} -EXPORT_SYMBOL(synchronize_shrinkers); - -#define SHRINK_BATCH 128 - -static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, - struct shrinker *shrinker, int priority) -{ - unsigned long freed = 0; - unsigned long long delta; - long total_scan; - long freeable; - long nr; - long new_nr; - long batch_size = shrinker->batch ? shrinker->batch - : SHRINK_BATCH; - long scanned = 0, next_deferred; - - freeable = shrinker->count_objects(shrinker, shrinkctl); - if (freeable == 0 || freeable == SHRINK_EMPTY) - return freeable; - - /* - * copy the current shrinker scan count into a local variable - * and zero it so that other concurrent shrinker invocations - * don't also do this scanning work. - */ - nr = xchg_nr_deferred(shrinker, shrinkctl); - - if (shrinker->seeks) { - delta = freeable >> priority; - delta *= 4; - do_div(delta, shrinker->seeks); - } else { - /* - * These objects don't require any IO to create. 
Trim - * them aggressively under memory pressure to keep - * them from causing refetches in the IO caches. - */ - delta = freeable / 2; - } - - total_scan = nr >> priority; - total_scan += delta; - total_scan = min(total_scan, (2 * freeable)); - - trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, - freeable, delta, total_scan, priority); - - /* - * Normally, we should not scan less than batch_size objects in one - * pass to avoid too frequent shrinker calls, but if the slab has less - * than batch_size objects in total and we are really tight on memory, - * we will try to reclaim all available objects, otherwise we can end - * up failing allocations although there are plenty of reclaimable - * objects spread over several slabs with usage less than the - * batch_size. - * - * We detect the "tight on memory" situations by looking at the total - * number of objects we want to scan (total_scan). If it is greater - * than the total number of objects on slab (freeable), we must be - * scanning at high prio and therefore should try to reclaim as much as - * possible. - */ - while (total_scan >= batch_size || - total_scan >= freeable) { - unsigned long ret; - unsigned long nr_to_scan = min(batch_size, total_scan); - - shrinkctl->nr_to_scan = nr_to_scan; - shrinkctl->nr_scanned = nr_to_scan; - ret = shrinker->scan_objects(shrinker, shrinkctl); - if (ret == SHRINK_STOP) - break; - freed += ret; - - count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned); - total_scan -= shrinkctl->nr_scanned; - scanned += shrinkctl->nr_scanned; - - cond_resched(); - } - - /* - * The deferred work is increased by any new work (delta) that wasn't - * done, decreased by old deferred work that was done now. - * - * And it is capped to two times of the freeable items. - */ - next_deferred = max_t(long, (nr + delta - scanned), 0); - next_deferred = min(next_deferred, (2 * freeable)); - - /* - * move the unused scan count back into the shrinker in a - * manner that handles concurrent updates. - */ - new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl); - - trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan); - return freed; -} - -#ifdef CONFIG_MEMCG -static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, - struct mem_cgroup *memcg, int priority) -{ - struct shrinker_info *info; - unsigned long ret, freed = 0; - int i; - - if (!mem_cgroup_online(memcg)) - return 0; - - if (!down_read_trylock(&shrinker_rwsem)) - return 0; - - info = shrinker_info_protected(memcg, nid); - if (unlikely(!info)) - goto unlock; - - for_each_set_bit(i, info->map, info->map_nr_max) { - struct shrink_control sc = { - .gfp_mask = gfp_mask, - .nid = nid, - .memcg = memcg, - }; - struct shrinker *shrinker; - - shrinker = idr_find(&shrinker_idr, i); - if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) { - if (!shrinker) - clear_bit(i, info->map); - continue; - } - - /* Call non-slab shrinkers even though kmem is disabled */ - if (!memcg_kmem_online() && - !(shrinker->flags & SHRINKER_NONSLAB)) - continue; - - ret = do_shrink_slab(&sc, shrinker, priority); - if (ret == SHRINK_EMPTY) { - clear_bit(i, info->map); - /* - * After the shrinker reported that it had no objects to - * free, but before we cleared the corresponding bit in - * the memcg shrinker map, a new object might have been - * added. To make sure, we have the bit set in this - * case, we invoke the shrinker one more time and reset - * the bit if it reports that it is not empty anymore. 
- * The memory barrier here pairs with the barrier in - * set_shrinker_bit(): - * - * list_lru_add() shrink_slab_memcg() - * list_add_tail() clear_bit() - * - * set_bit() do_shrink_slab() - */ - smp_mb__after_atomic(); - ret = do_shrink_slab(&sc, shrinker, priority); - if (ret == SHRINK_EMPTY) - ret = 0; - else - set_shrinker_bit(memcg, nid, i); - } - freed += ret; - - if (rwsem_is_contended(&shrinker_rwsem)) { - freed = freed ? : 1; - break; - } - } -unlock: - up_read(&shrinker_rwsem); - return freed; -} -#else /* CONFIG_MEMCG */ -static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, - struct mem_cgroup *memcg, int priority) -{ - return 0; -} -#endif /* CONFIG_MEMCG */ - -/** - * shrink_slab - shrink slab caches - * @gfp_mask: allocation context - * @nid: node whose slab caches to target - * @memcg: memory cgroup whose slab caches to target - * @priority: the reclaim priority - * - * Call the shrink functions to age shrinkable caches. - * - * @nid is passed along to shrinkers with SHRINKER_NUMA_AWARE set, - * unaware shrinkers will receive a node id of 0 instead. - * - * @memcg specifies the memory cgroup to target. Unaware shrinkers - * are called only if it is the root cgroup. - * - * @priority is sc->priority, we take the number of objects and >> by priority - * in order to get the scan target. - * - * Returns the number of reclaimed slab objects. - */ -static unsigned long shrink_slab(gfp_t gfp_mask, int nid, - struct mem_cgroup *memcg, - int priority) -{ - unsigned long ret, freed = 0; - struct shrinker *shrinker; - - /* - * The root memcg might be allocated even though memcg is disabled - * via "cgroup_disable=memory" boot parameter. This could make - * mem_cgroup_is_root() return false, then just run memcg slab - * shrink, but skip global shrink. This may result in premature - * oom. - */ - if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg)) - return shrink_slab_memcg(gfp_mask, nid, memcg, priority); - - if (!down_read_trylock(&shrinker_rwsem)) - goto out; - - list_for_each_entry(shrinker, &shrinker_list, list) { - struct shrink_control sc = { - .gfp_mask = gfp_mask, - .nid = nid, - .memcg = memcg, - }; - - ret = do_shrink_slab(&sc, shrinker, priority); - if (ret == SHRINK_EMPTY) - ret = 0; - freed += ret; - /* - * Bail out if someone want to register a new shrinker to - * prevent the registration from being stalled for long periods - * by parallel ongoing shrinking. - */ - if (rwsem_is_contended(&shrinker_rwsem)) { - freed = freed ? 
: 1;
-			break;
-		}
-	}
-
-	up_read(&shrinker_rwsem);
-out:
-	cond_resched();
-	return freed;
-}
-
 static unsigned long drop_slab_node(int nid)
 {
 	unsigned long freed = 0;
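Before the next patch, it may help to see the scan-target arithmetic of
do_shrink_slab() (moved verbatim into mm/shrinker.c above) in isolation.
The following is a standalone userspace sketch, not kernel code: the
function name scan_target() is invented, and SHRINKER_DEFAULT_SEEKS is
redefined locally with its upstream value of 2.

#include <stdio.h>

#define SHRINKER_DEFAULT_SEEKS	2

/* Mirrors do_shrink_slab(): deferred work from earlier passes plus this
 * pass's delta, with the result capped at twice the freeable objects. */
static long scan_target(long nr_deferred, long freeable, int priority,
			int seeks)
{
	long long delta;
	long total_scan;

	if (seeks) {
		/* freeable >> priority objects, scaled by 4 / seeks */
		delta = (long long)(freeable >> priority) * 4 / seeks;
	} else {
		/* memory-only objects: trim aggressively, half of them */
		delta = freeable / 2;
	}

	total_scan = (nr_deferred >> priority) + delta;
	if (total_scan > 2 * freeable)
		total_scan = 2 * freeable;
	return total_scan;
}

int main(void)
{
	/* 1M freeable objects, no deferred work, default priority 12 */
	printf("%ld\n", scan_target(0, 1L << 20, 12, SHRINKER_DEFAULT_SEEKS));
	return 0;
}

With these inputs the target is 512 objects, matching the
delta = freeable >> priority; delta *= 4; do_div(delta, shrinker->seeks)
sequence in the code above.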
From: Qi Zheng
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song
Subject: [PATCH 3/5] mm: shrinker: remove redundant shrinker_rwsem in debugfs operations
Date: Wed, 16 Aug 2023 16:34:17 +0800
Message-Id: <20230816083419.41088-4-zhengqi.arch@bytedance.com>
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0
debugfs_remove_recursive() will wait for debugfs_file_put() to return, so the shrinker will not be freed while debugfs operations (such as shrinker_debugfs_count_show() and shrinker_debugfs_scan_write()) are in flight. There is therefore no need to hold shrinker_rwsem during these operations.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 mm/shrinker_debug.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/mm/shrinker_debug.c b/mm/shrinker_debug.c
index 3ab53fad8876..61702bdc1af4 100644
--- a/mm/shrinker_debug.c
+++ b/mm/shrinker_debug.c
@@ -49,17 +49,12 @@ static int shrinker_debugfs_count_show(struct seq_file *m, void *v)
 	struct mem_cgroup *memcg;
 	unsigned long total;
 	bool memcg_aware;
-	int ret, nid;
+	int ret = 0, nid;
 
 	count_per_node = kcalloc(nr_node_ids, sizeof(unsigned long), GFP_KERNEL);
 	if (!count_per_node)
 		return -ENOMEM;
 
-	ret = down_read_killable(&shrinker_rwsem);
-	if (ret) {
-		kfree(count_per_node);
-		return ret;
-	}
 	rcu_read_lock();
 
 	memcg_aware = shrinker->flags & SHRINKER_MEMCG_AWARE;
@@ -92,7 +87,6 @@ static int shrinker_debugfs_count_show(struct seq_file *m, void *v)
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 
 	rcu_read_unlock();
-	up_read(&shrinker_rwsem);
 
 	kfree(count_per_node);
 	return ret;
@@ -117,7 +111,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
 	struct mem_cgroup *memcg = NULL;
 	int nid;
 	char kbuf[72];
-	ssize_t ret;
 
 	read_len = size < (sizeof(kbuf) - 1) ? size : (sizeof(kbuf) - 1);
 	if (copy_from_user(kbuf, buf, read_len))
@@ -146,12 +139,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
 		return -EINVAL;
 	}
 
-	ret = down_read_killable(&shrinker_rwsem);
-	if (ret) {
-		mem_cgroup_put(memcg);
-		return ret;
-	}
-
 	sc.nid = nid;
 	sc.memcg = memcg;
 	sc.nr_to_scan = nr_to_scan;
@@ -159,7 +146,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
 
 	shrinker->scan_objects(shrinker, &sc);
 
-	up_read(&shrinker_rwsem);
 	mem_cgroup_put(memcg);
 
 	return size;
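[Editor's note] The lifetime argument can be modeled in userspace. The sketch below (illustrative names only, not the real debugfs API) shows a remover that waits for every in-flight file operation to drop its reference before freeing, which is the guarantee debugfs_remove_recursive() provides and the reason the rwsem is redundant here:

/*
 * Hypothetical model of the ordering the changelog relies on: the
 * remove side waits until every in-flight file operation has dropped
 * its reference, so op handlers never race with the free.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>

struct dentry_model {
	atomic_int in_flight;	/* models debugfs_file_get()/put() */
	atomic_bool removed;
	int *payload;		/* models the shrinker backing the file */
};

static bool file_get(struct dentry_model *d)
{
	atomic_fetch_add(&d->in_flight, 1);
	if (atomic_load(&d->removed)) {		/* lost the race, back off */
		atomic_fetch_sub(&d->in_flight, 1);
		return false;
	}
	return true;
}

static void file_put(struct dentry_model *d)
{
	atomic_fetch_sub(&d->in_flight, 1);
}

static void remove_recursive(struct dentry_model *d)
{
	atomic_store(&d->removed, true);
	while (atomic_load(&d->in_flight))	/* drain all op handlers */
		sched_yield();
	free(d->payload);			/* freeing is safe now */
	d->payload = NULL;
}

int main(void)
{
	struct dentry_model d = { .payload = malloc(sizeof(int)) };

	if (file_get(&d)) {		/* an op handler runs ...       */
		*d.payload = 42;	/* ... touching the payload ... */
		file_put(&d);		/* ... then drops its reference */
	}
	remove_recursive(&d);		/* only returns once drained */
	printf("payload freed after all ops completed\n");
	return 0;
}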
From patchwork Wed Aug 16 08:34:18 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13354735
From: Qi Zheng
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song
Subject: [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem
Date: Wed, 16 Aug 2023 16:34:18 +0800
Message-Id: <20230816083419.41088-5-zhengqi.arch@bytedance.com>
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0
Currently, synchronize_shrinkers() is only used by the TTM pool. It only requires that no shrinkers run in parallel. Once we use an RCU+refcount method to implement lockless slab shrinking, we can no longer use shrinker_rwsem or synchronize_rcu() to guarantee that all shrinker invocations have seen an update before memory is freed. So introduce a new pool_shrink_rwsem and implement a private synchronize_shrinkers() on top of it to achieve the same purpose.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
 include/linux/shrinker.h       |  1 -
 mm/shrinker.c                  | 15 ---------------
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cddb9151d20f..713b1c0a70e1 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
 static struct shrinker mm_shrinker;
+static DECLARE_RWSEM(pool_shrink_rwsem);
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
 	unsigned int num_pages;
 	struct page *p;
 
+	down_read(&pool_shrink_rwsem);
 	spin_lock(&shrinker_lock);
 	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
@@ -329,6 +331,7 @@
 	} else {
 		num_pages = 0;
 	}
+	up_read(&pool_shrink_rwsem);
 
 	return num_pages;
 }
@@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 }
 EXPORT_SYMBOL(ttm_pool_init);
 
+/**
+ * synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is useful to guarantee that all shrinker invocations have seen an
+ * update, before freeing memory, similar to rcu.
+ */
+static void synchronize_shrinkers(void)
+{
+	down_write(&pool_shrink_rwsem);
+	up_write(&pool_shrink_rwsem);
+}
+
 /**
  * ttm_pool_fini - Cleanup a pool
  *
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 8dc15aa37410..6b5843c3b827 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
 					    const char *fmt, ...);
 extern void unregister_shrinker(struct shrinker *shrinker);
 extern void free_prealloced_shrinker(struct shrinker *shrinker);
-extern void synchronize_shrinkers(void);
 
 #ifdef CONFIG_SHRINKER_DEBUG
 extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 043c87ccfab4..a16cd448b924 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
 	shrinker->nr_deferred = NULL;
 }
 EXPORT_SYMBOL(unregister_shrinker);
-
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
-	down_write(&shrinker_rwsem);
-	up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);
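[Editor's note] A private rwsem used this way is just a drain barrier. The following userspace sketch (a pthread rwlock standing in for the kernel rwsem; names are illustrative) shows why an empty write-lock/unlock pair waits for all readers that are already inside:

/*
 * Minimal sketch of the pattern above: taking and dropping a write lock
 * acts as a barrier that waits for every in-flight reader. Illustrative
 * only; not the kernel rwsem API.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t pool_shrink_rwlock = PTHREAD_RWLOCK_INITIALIZER;

static void *shrinker(void *arg)
{
	pthread_rwlock_rdlock(&pool_shrink_rwlock);	/* ttm_pool_shrink() */
	usleep(1000);					/* ... shrink work ... */
	pthread_rwlock_unlock(&pool_shrink_rwlock);
	return NULL;
}

/* Waits until every shrinker that started before this call has finished:
 * the write lock cannot be granted while any reader holds the lock. */
static void synchronize_shrinkers_model(void)
{
	pthread_rwlock_wrlock(&pool_shrink_rwlock);
	pthread_rwlock_unlock(&pool_shrink_rwlock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, shrinker, NULL);
	synchronize_shrinkers_model();	/* pool memory may be freed after this */
	printf("all running shrinkers drained\n");
	pthread_join(t, NULL);
	return 0;
}

Note that, as with the kernel primitive, this only drains shrinkers that are already running; it does not prevent later ones from starting.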
From patchwork Wed Aug 16 08:34:19 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13354736
From: Qi Zheng
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org, Qi Zheng, Muchun Song
Subject: [PATCH 5/5] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}
Date: Wed, 16 Aug 2023 16:34:19 +0800
Message-Id: <20230816083419.41088-6-zhengqi.arch@bytedance.com>
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0

Currently, we maintain two linear arrays per node per memcg: shrinker_info::map and shrinker_info::nr_deferred. We need to resize them whenever shrinker_nr_max is exceeded, that is, allocate a new array, copy the old array into it, and finally free the old array by RCU.

For shrinker_info::map, we do set_bit() under the RCU lock, so we may set a bit in an old map that is about to be freed, and that update may be lost. The current workaround is not to copy the old map when resizing, but to set all the corresponding bits in the new map to 1. This avoids the data loss, but brings the overhead of more pointless loops while doing memcg slab shrink.

For shrinker_info::nr_deferred, we only modify it under the read lock of shrinker_rwsem, so it cannot run concurrently with the resizing. But once we make memcg slab shrink lockless, it will have the same data loss problem as shrinker_info::map, and we cannot work around it the way we do for the map.

For such resizable arrays, the most straightforward idea is to switch to an xarray, as we did for list_lru [1].
However, we would then need to do xa_store() in the list_lru_add()-->set_shrinker_bit() path, which can allocate memory, and list_lru_add() does not accept failure. A possible solution is to pre-allocate, but it is hard to determine where to pre-allocate (as in the deferred_split_shrinker case).

Therefore, this commit chooses to introduce the following secondary array for shrinker_info::{map, nr_deferred}:

 +---------------+--------+--------+-----+
 | shrinker_info | unit 0 | unit 1 | ... | (secondary array)
 +---------------+--------+--------+-----+
                     |
                     v
 +---------------+-----+
 | nr_deferred[] | map | (leaf array)
 +---------------+-----+
 (shrinker_info_unit)

The leaf array is never freed unless the memcg is destroyed. The secondary array is resized every time the shrinker id exceeds shrinker_nr_max, so a shrinker_info_unit can be indexed through both the old and the new shrinker_info->unit[x]. Then even if we get the old secondary array under the RCU lock, the map and nr_deferred we find are still valid, so updates to nr_deferred and the map will not be lost.

[1]. https://lore.kernel.org/all/20220228122126.37293-13-songmuchun@bytedance.com/

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 include/linux/memcontrol.h |  12 +-
 include/linux/shrinker.h   |  17 +++
 mm/shrinker.c              | 249 +++++++++++++++++++++++--------------
 3 files changed, 171 insertions(+), 107 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 11810a2cfd2d..b49515bb6fbd 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 
 struct mem_cgroup;
 struct obj_cgroup;
@@ -88,17 +89,6 @@ struct mem_cgroup_reclaim_iter {
 	unsigned int generation;
 };
 
-/*
- * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
- * shrinkers, which have elements charged to this memcg.
- */
-struct shrinker_info {
-	struct rcu_head rcu;
-	atomic_long_t *nr_deferred;
-	unsigned long *map;
-	int map_nr_max;
-};
-
 struct lruvec_stats_percpu {
 	/* Local (CPU and cgroup) state */
 	long state[NR_VM_NODE_STAT_ITEMS];
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 6b5843c3b827..8a3c99422fd3 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -5,6 +5,23 @@
 #include
 #include
 
+#define SHRINKER_UNIT_BITS	BITS_PER_LONG
+
+/*
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to the memcg.
+ */
+struct shrinker_info_unit {
+	atomic_long_t nr_deferred[SHRINKER_UNIT_BITS];
+	DECLARE_BITMAP(map, SHRINKER_UNIT_BITS);
+};
+
+struct shrinker_info {
+	struct rcu_head rcu;
+	int map_nr_max;
+	struct shrinker_info_unit *unit[];
+};
+
 /*
  * This struct is used to pass information from page reclaim to the shrinkers.
  * We consolidate the values for easier extension later.
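[Editor's note] The resize-by-sharing idea in the diagram can be modeled in a few lines of userspace C. In the sketch below, expand(), struct info and struct unit are hypothetical stand-ins for expand_one_shrinker_info(), shrinker_info and shrinker_info_unit; the kernel frees the old top level via RCU, which the sketch elides:

/*
 * Userspace sketch of the two-level layout: resizing reallocates only
 * the top-level pointer array, while leaf units are reused, so a reader
 * holding the old top level still sees live leaves.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define UNIT_BITS 64

struct unit {				/* models shrinker_info_unit */
	long nr_deferred[UNIT_BITS];
	unsigned long map;
};

struct info {				/* models shrinker_info */
	int map_nr_max;
	struct unit *unit[];		/* secondary array of leaf pointers */
};

static struct info *expand(struct info *old, int new_nr_max)
{
	int old_units = old ? old->map_nr_max / UNIT_BITS : 0;
	int new_units = new_nr_max / UNIT_BITS;
	struct info *new = malloc(sizeof(*new) + new_units * sizeof(struct unit *));

	new->map_nr_max = new_nr_max;
	if (old)	/* share the existing leaves instead of copying them */
		memcpy(new->unit, old->unit, old_units * sizeof(struct unit *));
	for (int i = old_units; i < new_units; i++)
		new->unit[i] = calloc(1, sizeof(struct unit));
	return new;
}

int main(void)
{
	struct info *v1 = expand(NULL, UNIT_BITS);
	struct unit *leaf = v1->unit[0];
	struct info *v2 = expand(v1, 2 * UNIT_BITS);

	/* A write through the stale top level lands in the same leaf the
	 * new top level points at, so nothing is lost across the resize. */
	v1->unit[0]->map |= 1UL << 3;
	printf("seen via v2: %lu (leaf shared: %d)\n",
	       v2->unit[0]->map, v2->unit[0] == leaf);
	/* freeing (kvfree_rcu of v1 in the kernel) omitted in this sketch */
	return 0;
}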
diff --git a/mm/shrinker.c b/mm/shrinker.c
index a16cd448b924..a7b5397a4fb9 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -12,15 +12,50 @@ DECLARE_RWSEM(shrinker_rwsem);
 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
 
-/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
-static inline int shrinker_map_size(int nr_items)
+static inline int shrinker_unit_size(int nr_items)
 {
-	return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+	return (DIV_ROUND_UP(nr_items, SHRINKER_UNIT_BITS) * sizeof(struct shrinker_info_unit *));
 }
 
-static inline int shrinker_defer_size(int nr_items)
+static inline void shrinker_unit_free(struct shrinker_info *info, int start)
 {
-	return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+	struct shrinker_info_unit **unit;
+	int nr, i;
+
+	if (!info)
+		return;
+
+	unit = info->unit;
+	nr = DIV_ROUND_UP(info->map_nr_max, SHRINKER_UNIT_BITS);
+
+	for (i = start; i < nr; i++) {
+		if (!unit[i])
+			break;
+
+		kfree(unit[i]);
+		unit[i] = NULL;
+	}
+}
+
+static inline int shrinker_unit_alloc(struct shrinker_info *new,
+				      struct shrinker_info *old, int nid)
+{
+	struct shrinker_info_unit *unit;
+	int nr = DIV_ROUND_UP(new->map_nr_max, SHRINKER_UNIT_BITS);
+	int start = old ? DIV_ROUND_UP(old->map_nr_max, SHRINKER_UNIT_BITS) : 0;
+	int i;
+
+	for (i = start; i < nr; i++) {
+		unit = kzalloc_node(sizeof(*unit), GFP_KERNEL, nid);
+		if (!unit) {
+			shrinker_unit_free(new, start);
+			return -ENOMEM;
+		}
+
+		new->unit[i] = unit;
+	}
+
+	return 0;
 }
 
 void free_shrinker_info(struct mem_cgroup *memcg)
@@ -32,6 +67,7 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 	for_each_node(nid) {
 		pn = memcg->nodeinfo[nid];
 		info = rcu_dereference_protected(pn->shrinker_info, true);
+		shrinker_unit_free(info, 0);
 		kvfree(info);
 		rcu_assign_pointer(pn->shrinker_info, NULL);
 	}
@@ -40,28 +76,27 @@ void free_shrinker_info(struct mem_cgroup *memcg)
 int alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
-	int nid, size, ret = 0;
-	int map_size, defer_size = 0;
+	int nid, ret = 0;
+	int array_size = 0;
 
 	down_write(&shrinker_rwsem);
-	map_size = shrinker_map_size(shrinker_nr_max);
-	defer_size = shrinker_defer_size(shrinker_nr_max);
-	size = map_size + defer_size;
+	array_size = shrinker_unit_size(shrinker_nr_max);
 	for_each_node(nid) {
-		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
-		if (!info) {
-			free_shrinker_info(memcg);
-			ret = -ENOMEM;
-			break;
-		}
-		info->nr_deferred = (atomic_long_t *)(info + 1);
-		info->map = (void *)info->nr_deferred + defer_size;
+		info = kvzalloc_node(sizeof(*info) + array_size, GFP_KERNEL, nid);
+		if (!info)
+			goto err;
 		info->map_nr_max = shrinker_nr_max;
+		if (shrinker_unit_alloc(info, NULL, nid))
+			goto err;
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_write(&shrinker_rwsem);
 
 	return ret;
+
+err:
+	free_shrinker_info(memcg);
+	return -ENOMEM;
 }
 
 static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
@@ -71,15 +106,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
 					 lockdep_is_held(&shrinker_rwsem));
 }
 
-static int expand_one_shrinker_info(struct mem_cgroup *memcg,
-				    int map_size, int defer_size,
-				    int old_map_size, int old_defer_size,
-				    int new_nr_max)
+static int expand_one_shrinker_info(struct mem_cgroup *memcg, int new_size,
+				    int old_size, int new_nr_max)
 {
 	struct shrinker_info *new, *old;
 	struct mem_cgroup_per_node *pn;
 	int nid;
-	int size = map_size + defer_size;
 
 	for_each_node(nid) {
 		pn = memcg->nodeinfo[nid];
@@ -92,21 +124,17 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (new_nr_max <= old->map_nr_max)
 			continue;
 
-		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+		new = kvmalloc_node(sizeof(*new) + new_size, GFP_KERNEL, nid);
 		if (!new)
 			return -ENOMEM;
 
-		new->nr_deferred = (atomic_long_t *)(new + 1);
-		new->map = (void *)new->nr_deferred + defer_size;
 		new->map_nr_max = new_nr_max;
 
-		/* map: set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_map_size);
-		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
-		/* nr_deferred: copy old values, clear all new values */
-		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
-		memset((void *)new->nr_deferred + old_defer_size, 0,
-		       defer_size - old_defer_size);
+		memcpy(new->unit, old->unit, old_size);
+		if (shrinker_unit_alloc(new, old, nid)) {
+			kvfree(new);
+			return -ENOMEM;
+		}
 
 		rcu_assign_pointer(pn->shrinker_info, new);
 		kvfree_rcu(old, rcu);
@@ -118,9 +146,8 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
 static int expand_shrinker_info(int new_id)
 {
 	int ret = 0;
-	int new_nr_max = round_up(new_id + 1, BITS_PER_LONG);
-	int map_size, defer_size = 0;
-	int old_map_size, old_defer_size = 0;
+	int new_nr_max = round_up(new_id + 1, SHRINKER_UNIT_BITS);
+	int new_size, old_size = 0;
 	struct mem_cgroup *memcg;
 
 	if (!root_mem_cgroup)
@@ -128,15 +155,12 @@ static int expand_shrinker_info(int new_id)
 
 	lockdep_assert_held(&shrinker_rwsem);
 
-	map_size = shrinker_map_size(new_nr_max);
-	defer_size = shrinker_defer_size(new_nr_max);
-	old_map_size = shrinker_map_size(shrinker_nr_max);
-	old_defer_size = shrinker_defer_size(shrinker_nr_max);
+	new_size = shrinker_unit_size(new_nr_max);
+	old_size = shrinker_unit_size(shrinker_nr_max);
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		ret = expand_one_shrinker_info(memcg, map_size, defer_size,
-					       old_map_size, old_defer_size,
+		ret = expand_one_shrinker_info(memcg, new_size, old_size,
 					       new_nr_max);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
@@ -150,17 +174,34 @@ static int expand_shrinker_info(int new_id)
 	return ret;
 }
 
+static inline int shrinker_id_to_index(int shrinker_id)
+{
+	return shrinker_id / SHRINKER_UNIT_BITS;
+}
+
+static inline int shrinker_id_to_offset(int shrinker_id)
+{
+	return shrinker_id % SHRINKER_UNIT_BITS;
+}
+
+static inline int calc_shrinker_id(int index, int offset)
+{
+	return index * SHRINKER_UNIT_BITS + offset;
+}
+
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 {
 	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
 		struct shrinker_info *info;
+		struct shrinker_info_unit *unit;
 
 		rcu_read_lock();
 		info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
+		unit = info->unit[shrinker_id_to_index(shrinker_id)];
 		if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) {
 			/* Pairs with smp mb in shrink_slab() */
 			smp_mb__before_atomic();
-			set_bit(shrinker_id, info->map);
+			set_bit(shrinker_id_to_offset(shrinker_id), unit->map);
 		}
 		rcu_read_unlock();
 	}
@@ -209,26 +250,31 @@ static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
 				   struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
+	struct shrinker_info_unit *unit;
 
 	info = shrinker_info_protected(memcg, nid);
-	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+	unit = info->unit[shrinker_id_to_index(shrinker->id)];
+	return atomic_long_xchg(&unit->nr_deferred[shrinker_id_to_offset(shrinker->id)], 0);
 }
 
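[Editor's note] The new helpers are pure index arithmetic; this small standalone program (userspace sketch, with SHRINKER_UNIT_BITS fixed at 64, the BITS_PER_LONG of a 64-bit build) checks the round-trip property they rely on:

/* Tiny standalone check: a shrinker id splits into a secondary-array
 * index and a bit offset inside one shrinker_info_unit, and the split
 * round-trips for every id. */
#include <assert.h>
#include <stdio.h>

#define SHRINKER_UNIT_BITS 64

static int shrinker_id_to_index(int id)  { return id / SHRINKER_UNIT_BITS; }
static int shrinker_id_to_offset(int id) { return id % SHRINKER_UNIT_BITS; }
static int calc_shrinker_id(int index, int offset)
{
	return index * SHRINKER_UNIT_BITS + offset;
}

int main(void)
{
	for (int id = 0; id < 300; id++)
		assert(calc_shrinker_id(shrinker_id_to_index(id),
					shrinker_id_to_offset(id)) == id);
	printf("id 130 -> unit %d, bit %d\n",
	       shrinker_id_to_index(130), shrinker_id_to_offset(130));
	return 0;
}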
 static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
 				  struct mem_cgroup *memcg)
 {
 	struct shrinker_info *info;
+	struct shrinker_info_unit *unit;
 
 	info = shrinker_info_protected(memcg, nid);
-	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+	unit = info->unit[shrinker_id_to_index(shrinker->id)];
+	return atomic_long_add_return(nr, &unit->nr_deferred[shrinker_id_to_offset(shrinker->id)]);
 }
 
 void reparent_shrinker_deferred(struct mem_cgroup *memcg)
 {
-	int i, nid;
+	int nid, index, offset;
 	long nr;
 	struct mem_cgroup *parent;
 	struct shrinker_info *child_info, *parent_info;
+	struct shrinker_info_unit *child_unit, *parent_unit;
 
 	parent = parent_mem_cgroup(memcg);
 	if (!parent)
@@ -239,9 +285,13 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
 	for_each_node(nid) {
 		child_info = shrinker_info_protected(memcg, nid);
 		parent_info = shrinker_info_protected(parent, nid);
-		for (i = 0; i < child_info->map_nr_max; i++) {
-			nr = atomic_long_read(&child_info->nr_deferred[i]);
-			atomic_long_add(nr, &parent_info->nr_deferred[i]);
+		for (index = 0; index < shrinker_id_to_index(child_info->map_nr_max); index++) {
+			child_unit = child_info->unit[index];
+			parent_unit = parent_info->unit[index];
+			for (offset = 0; offset < SHRINKER_UNIT_BITS; offset++) {
+				nr = atomic_long_read(&child_unit->nr_deferred[offset]);
+				atomic_long_add(nr, &parent_unit->nr_deferred[offset]);
+			}
 		}
 	}
 	up_read(&shrinker_rwsem);
@@ -407,7 +457,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 {
 	struct shrinker_info *info;
 	unsigned long ret, freed = 0;
-	int i;
+	int offset, index = 0;
 
 	if (!mem_cgroup_online(memcg))
 		return 0;
@@ -419,56 +469,63 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 	if (unlikely(!info))
 		goto unlock;
 
-	for_each_set_bit(i, info->map, info->map_nr_max) {
-		struct shrink_control sc = {
-			.gfp_mask = gfp_mask,
-			.nid = nid,
-			.memcg = memcg,
-		};
-		struct shrinker *shrinker;
+	for (; index < shrinker_id_to_index(info->map_nr_max); index++) {
+		struct shrinker_info_unit *unit;
 
-		shrinker = idr_find(&shrinker_idr, i);
-		if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
-			if (!shrinker)
-				clear_bit(i, info->map);
-			continue;
-		}
+		unit = info->unit[index];
 
-		/* Call non-slab shrinkers even though kmem is disabled */
-		if (!memcg_kmem_online() &&
-		    !(shrinker->flags & SHRINKER_NONSLAB))
-			continue;
+		for_each_set_bit(offset, unit->map, SHRINKER_UNIT_BITS) {
+			struct shrink_control sc = {
+				.gfp_mask = gfp_mask,
+				.nid = nid,
+				.memcg = memcg,
+			};
+			struct shrinker *shrinker;
+			int shrinker_id = calc_shrinker_id(index, offset);
 
-		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY) {
-			clear_bit(i, info->map);
-			/*
-			 * After the shrinker reported that it had no objects to
-			 * free, but before we cleared the corresponding bit in
-			 * the memcg shrinker map, a new object might have been
-			 * added. To make sure, we have the bit set in this
-			 * case, we invoke the shrinker one more time and reset
-			 * the bit if it reports that it is not empty anymore.
-			 * The memory barrier here pairs with the barrier in
-			 * set_shrinker_bit():
-			 *
-			 * list_lru_add()     shrink_slab_memcg()
-			 *   list_add_tail()    clear_bit()
-			 *
-			 *   set_bit()          do_shrink_slab()
-			 */
-			smp_mb__after_atomic();
-			ret = do_shrink_slab(&sc, shrinker, priority);
-			if (ret == SHRINK_EMPTY)
-				ret = 0;
-			else
-				set_shrinker_bit(memcg, nid, i);
-		}
-		freed += ret;
+			shrinker = idr_find(&shrinker_idr, shrinker_id);
+			if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
+				if (!shrinker)
+					clear_bit(offset, unit->map);
+				continue;
+			}
 
-		if (rwsem_is_contended(&shrinker_rwsem)) {
-			freed = freed ? : 1;
-			break;
+			/* Call non-slab shrinkers even though kmem is disabled */
+			if (!memcg_kmem_online() &&
+			    !(shrinker->flags & SHRINKER_NONSLAB))
+				continue;
+
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY) {
+				clear_bit(offset, unit->map);
+				/*
+				 * After the shrinker reported that it had no objects to
+				 * free, but before we cleared the corresponding bit in
+				 * the memcg shrinker map, a new object might have been
+				 * added. To make sure, we have the bit set in this
+				 * case, we invoke the shrinker one more time and reset
+				 * the bit if it reports that it is not empty anymore.
+				 * The memory barrier here pairs with the barrier in
+				 * set_shrinker_bit():
+				 *
+				 * list_lru_add()     shrink_slab_memcg()
+				 *   list_add_tail()    clear_bit()
+				 *
+				 *   set_bit()          do_shrink_slab()
+				 */
+				smp_mb__after_atomic();
+				ret = do_shrink_slab(&sc, shrinker, priority);
+				if (ret == SHRINK_EMPTY)
+					ret = 0;
+				else
+					set_shrinker_bit(memcg, nid, shrinker_id);
+			}
+			freed += ret;
+
+			if (rwsem_is_contended(&shrinker_rwsem)) {
+				freed = freed ? : 1;
+				goto unlock;
+			}
 		}
 	}
 unlock: