From patchwork Tue Nov 12 16:38:45 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13872507
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:45 +0100
Subject: [PATCH RFC 1/6] mm/slub: add opt-in caching layer of percpu sheaves
MIME-Version: 1.0
Message-Id: <20241112-slub-percpu-caches-v1-1-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E.
McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Vlastimil Babka X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=openpgp-sha256; l=39055; i=vbabka@suse.cz; h=from:subject:message-id; bh=DciTSGOm+arLIHt7zsMGKBEGrzIdhA9xzgbvN+05rTk=; b=owEBbQGS/pANAwAIAbvgsHXSRYiaAcsmYgBnM4TDORLCFsIksAOm72brgBdgUpJot1qH8Afyw fpcwjJaek+JATMEAAEIAB0WIQR7u8hBFZkjSJZITfG74LB10kWImgUCZzOEwwAKCRC74LB10kWI mu+DCACArvmGlEL0lGM9pwP/QRogUAIj8PQkPgl3q7cmiRWIZ0gXN7avxZWcnSBDCgQxBx/pwbk sddm5oHiTXhplh/D805Tjq92oAicFNCM76/6D8WGliCIkQz5CrS+Qm63LnZr5RKP6ESmGVVa8c4 W1Afk7mmuExS2cJd7qGZYz+49qoDtxKY4Cqyt3PDi9FZytPocgNaXcCqOGZYedl6qXHPY29oyEJ dyVA4GULHto3sJgnXruAOd5+xF2L/MzCT+lqrZqO3Lcd9TQMf1DcHuJyHuXrtjv7fa4shcYOQtD Ht0gua/nSlIZRf0e+P1k+Bl+Bm3yyLCsqK1vgvULHwEiwaYl X-Developer-Key: i=vbabka@suse.cz; a=openpgp; fpr=A940D434992C2E8E99103D50224FA7E7CC82A664 X-Stat-Signature: 7qib8gzf65wriz8kbm66c9o4cp71awje X-Rspamd-Queue-Id: 048A7A002F X-Rspamd-Server: rspam08 X-Rspam-User: X-HE-Tag: 1731429545-330861 X-HE-Meta: U2FsdGVkX19CHouQJZCwRRCMJiA0TJbEQn0JbEZ7k7UUsZlIiekiltieg890KrlEOJpjmLlUIMHT2XHNPXQKlV/zBWCkOun/lJEbd6YIXihC3OuHMisbe8aD10J9ra9qK20JuvWLJSEDRCKdym3YSxvcFNEOP6Otj1kGct6x9wgXXIiAzSZo2gA63L1NIHfT6liwZ0Q1hJJV/WNCOg76rzEOvKVFo5FBB84Qp2xsP8b+Xuo/iCgtLjzxgekzUSkJDOak+2cAgcIk3kO0mfNrUBESo8MUfLbaaq25IlqccCot3j4uCo/FahCBYRJj2nNnj3IF6iD6ZJNJu7K8JloDVDROqrfN3Pnr5fyOK5Qu7ADHVUlLotvZ2zlwyVy8+apZ2pNdp1PgpyD6qi16JTU9r8+lvp4XtsZIv1feHI2X6R57Hb6pi8zu6X1ma/gW2nBzcySP1Nym4W30rBlrmyJzd4kfd8BmSvqfBwQnYbWGWChPEJDmhwByhZ2IqhGig3puAqobe9VcnLZQqDdkcJyRdEHCf1xCoQZbsqKGhZ8S9jcIg1VQWhSjJz3kT1yRLdFxLSXgzJGcK5vPdtWgGxnkZN0xCE+tOl+4OKxWO20QQhm9DVossiMCdm1edJfBqkApwuInCG3AH7qfRADscZ22IMtc29d1aUSnGs+IZMUodHenzOT+kPUsmTFBxKF+8xxg5ORA6TqhIlw+3vhI6O1AnLd9E8QFR62HLrpQB5QPGS5YAPX1mWXTk3t9cC0QfMahEhx85AuOo3x9FrBOQ1EJR+6TIP61bOaps5G3JxRtC4BsqnwZ1vfKu6vBCCC7I0hqLH9Cu9ba0ZKCMkodB8N/8nUGhlJE7O04uZ3/DNa05tYy6CnmkJSTy+ShtbXv2+cAfQN6uZ7ZkyrCejLH7zVvGqMwbH52gUnf9DGmb/3hhrytMhe3Mo7t5wY73M6Dc6R3hJA/F9Ign+fnwLavsH9 SuenStpN rcMVeoa1m3EXNN9UuIeFMM7SdKZv0+HR2Lc/hpjsUciZS3uI6RWyo2ePoElU8M/P7kQ+i+1xJk1IWn0odHfax+ouOrP0vARG+hOlWis5pB12MMjhfXvRAbI5JZo3tDexdMDf1MBECUWzz81PIFclQOD146rPRUEB+JkrN953iKy3JDeIBeJRi6IYS9WVLJxkP7Dhxhp6cSPIUUiLDEF2NapCqRBrB/9quSQ1spA/VDnc5qpd5T67O0vVnKiB3jhFVONHHmOxE1WIXgYJzUJt2qAmxIicd9NV1evgsvkO0m5SqosFD6jkcgRPDkPVlIGnMMWnd X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Specifying a non-zero value for a new struct kmem_cache_args field sheaf_capacity will setup a caching layer of percpu arrays called sheaves of given capacity for the created cache. Allocations from the cache will allocate via the percpu sheaves (main or spare) as long as they have no NUMA node preference. Frees will also refill the the sheaves. When both percpu sheaves are found empty during an allocation, an empty sheaf may be replaced with a full one from the per-node barn. If none are available and the allocation is allowed to block, an empty sheaf is refilled from slab(s) by an internal bulk alloc operation. When both percpu sheaves are full during freeing, the barn can replace a full one with an empty one, unless over a full sheaves limit. In that case a sheaf is flushed to slab(s) by an internal bulk free operation. 
Flushing sheaves and barns is also wired to the existing cpu flushing and
cache shrinking operations.

The sheaves do not distinguish NUMA locality of the cached objects. If an
allocation is requested with kmem_cache_alloc_node() with a specific node
(not NUMA_NO_NODE), sheaves are bypassed.

The bulk operations exposed to slab users also try to utilize the sheaves
as long as the necessary (full or empty) sheaves are available on the cpu
or in the barn. Once depleted, they will fall back to bulk alloc/free to
slabs directly to avoid double copying.

Sysfs stat counters alloc_cpu_sheaf and free_cpu_sheaf count objects
allocated or freed using the sheaves. Counters sheaf_refill,
sheaf_flush_main and sheaf_flush_other count objects filled or flushed
from or to slab pages, and can be used to assess how effective the
caching is. The refill and flush operations will also count towards the
usual alloc_fastpath/slowpath, free_fastpath/slowpath and other counters.

Access to the percpu sheaves is protected by local_lock_irqsave()
operations; each per-NUMA-node barn has a spin_lock.

A current limitation is that when slub_debug is enabled for a cache with
percpu sheaves, the objects in the array are considered allocated from
the slub_debug perspective, and the alloc/free debugging hooks occur when
moving the objects between the array and slab pages. This means that e.g.
a use-after-free that occurs for an object cached in the array is
undetected. Collected alloc/free stacktraces might also be less useful.
This limitation could be changed in the future.

On the other hand, KASAN, kmemcg and other hooks are executed on actual
allocations and frees by kmem_cache users even if those use the array, so
their debugging or accounting accuracy should be unaffected.

Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |  34 ++
 mm/slab.h            |   2 +
 mm/slab_common.c     |   5 +-
 mm/slub.c            | 982 ++++++++++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 973 insertions(+), 50 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h index b35e2db7eb0ecc4933126f56b2c3dbf369cbb44b..b13fb1c1f03c14a5b45bc6a64a2096883aef9f83 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -309,6 +309,40 @@ struct kmem_cache_args { * %NULL means no constructor. */ void (*ctor)(void *); + /** + * @sheaf_capacity: Enable sheaves of given capacity for the cache. + * + * With a non-zero value, allocations from the cache go through caching + * arrays called sheaves. Each cpu has a main sheaf that's always + * present, and a spare sheaf thay may be not present. When both become + * empty, there's an attempt to replace an empty sheaf with a full sheaf + * from the per-node barn. + * + * When no full sheaf is available, and gfp flags allow blocking, a + * sheaf is allocated and filled from slab(s) using bulk allocation. + * Otherwise the allocation falls back to the normal operation + * allocating a single object from a slab. + * + * Analogically when freeing and both percpu sheaves are full, the barn + * may replace it with an empty sheaf, unless it's over capacity. In + * that case a sheaf is bulk freed to slab pages. + * + * The sheaves does not distinguish NUMA placement of objects, so + * allocations via kmem_cache_alloc_node() with a node specified other + * than NUMA_NO_NODE will bypass them. + * + * Bulk allocation and free operations also try to use the cpu sheaves + * and barn, but fallback to using slab pages directly.
+ * + * Limitations: when slub_debug is enabled for the cache, all relevant + * actions (i.e. poisoning, obtaining stacktraces) and checks happen + * when objects move between sheaves and slab pages, which may result in + * e.g. not detecting a use-after-free while the object is in the array + * cache, and the stacktraces may be less useful. + * + * %0 means no sheaves will be created + */ + unsigned int sheaf_capacity; }; struct kmem_cache *__kmem_cache_create_args(const char *name, diff --git a/mm/slab.h b/mm/slab.h index 6c6fe6d630ce3d919c29bafd15b401324618da1a..001e0d55467bb4803b5dff718ba8e0c775f42b3f 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -254,6 +254,7 @@ struct kmem_cache { #ifndef CONFIG_SLUB_TINY struct kmem_cache_cpu __percpu *cpu_slab; #endif + struct slub_percpu_sheaves __percpu *cpu_sheaves; /* Used for retrieving partial slabs, etc. */ slab_flags_t flags; unsigned long min_partial; @@ -267,6 +268,7 @@ struct kmem_cache { /* Number of per cpu partial slabs to keep around */ unsigned int cpu_partial_slabs; #endif + unsigned int sheaf_capacity; struct kmem_cache_order_objects oo; /* Allocation and freeing of slabs */ diff --git a/mm/slab_common.c b/mm/slab_common.c index 893d320599151845973b4eee9c7accc0d807aa72..7939f3f017740e0ac49ffa971c45409d0fbe2f23 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -161,6 +161,9 @@ int slab_unmergeable(struct kmem_cache *s) return 1; #endif + if (s->cpu_sheaves) + return 1; + /* * We may have set a slab to be unmergeable during bootstrap. */ @@ -317,7 +320,7 @@ struct kmem_cache *__kmem_cache_create_args(const char *name, object_size - args->usersize < args->useroffset)) args->usersize = args->useroffset = 0; - if (!args->usersize) + if (!args->usersize && !args->sheaf_capacity) s = __kmem_cache_alias(name, object_size, args->align, flags, args->ctor); if (s) diff --git a/mm/slub.c b/mm/slub.c index 5b832512044e3ead8ccde2c02308bd8954246db5..7da08112213b203993b5159eb35a1ea640d706fe 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -347,8 +347,10 @@ static inline void debugfs_slab_add(struct kmem_cache *s) { } #endif enum stat_item { + ALLOC_PCS, /* Allocation from percpu sheaf */ ALLOC_FASTPATH, /* Allocation from cpu slab */ ALLOC_SLOWPATH, /* Allocation by getting a new cpu slab */ + FREE_PCS, /* Free to percpu sheaf */ FREE_FASTPATH, /* Free to cpu slab */ FREE_SLOWPATH, /* Freeing not to cpu slab */ FREE_FROZEN, /* Freeing to frozen slab */ @@ -373,6 +375,12 @@ enum stat_item { CPU_PARTIAL_FREE, /* Refill cpu partial on free */ CPU_PARTIAL_NODE, /* Refill cpu partial from node partial */ CPU_PARTIAL_DRAIN, /* Drain cpu partial to node partial */ + SHEAF_FLUSH_MAIN, /* Objects flushed from main percpu sheaf */ + SHEAF_FLUSH_OTHER, /* Objects flushed from other sheaves */ + SHEAF_REFILL, /* Objects refilled to a sheaf */ + SHEAF_SWAP, /* Swapping main and spare sheaf */ + SHEAF_ALLOC, /* Allocation of an empty sheaf */ + SHEAF_FREE, /* Freeing of an empty sheaf */ NR_SLUB_STAT_ITEMS }; @@ -419,6 +427,35 @@ void stat_add(const struct kmem_cache *s, enum stat_item si, int v) #endif } +#define MAX_FULL_SHEAVES 10 +#define MAX_EMPTY_SHEAVES 10 + +struct node_barn { + spinlock_t lock; + struct list_head sheaves_full; + struct list_head sheaves_empty; + unsigned int nr_full; + unsigned int nr_empty; +}; + +struct slab_sheaf { + union { + struct rcu_head rcu_head; + struct list_head barn_list; + }; + struct kmem_cache *cache; + unsigned int size; + void *objects[]; +}; + +struct slub_percpu_sheaves { + local_lock_t lock; + struct slab_sheaf *main; 
/* never NULL when unlocked */ + struct slab_sheaf *spare; /* empty or full, may be NULL */ + struct slab_sheaf *rcu_free; + struct node_barn *barn; +}; + /* * The slab lists for all objects. */ @@ -431,6 +468,7 @@ struct kmem_cache_node { atomic_long_t total_objects; struct list_head full; #endif + struct node_barn *barn; }; static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node) @@ -454,12 +492,19 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node) */ static nodemask_t slab_nodes; -#ifndef CONFIG_SLUB_TINY /* * Workqueue used for flush_cpu_slab(). */ static struct workqueue_struct *flushwq; -#endif + +struct slub_flush_work { + struct work_struct work; + struct kmem_cache *s; + bool skip; +}; + +static DEFINE_MUTEX(flush_lock); +static DEFINE_PER_CPU(struct slub_flush_work, slub_flush); /******************************************************************** * Core slab cache functions @@ -2398,6 +2443,349 @@ static void *setup_object(struct kmem_cache *s, void *object) return object; } +static struct slab_sheaf *alloc_empty_sheaf(struct kmem_cache *s, gfp_t gfp) +{ + struct slab_sheaf *sheaf = kzalloc(struct_size(sheaf, objects, + s->sheaf_capacity), gfp); + + if (unlikely(!sheaf)) + return NULL; + + sheaf->cache = s; + + stat(s, SHEAF_ALLOC); + + return sheaf; +} + +static void free_empty_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf) +{ + kfree(sheaf); + + stat(s, SHEAF_FREE); +} + +static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, + size_t size, void **p); + + +static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf, + gfp_t gfp) +{ + int to_fill = s->sheaf_capacity - sheaf->size; + int filled; + + if (!to_fill) + return 0; + + filled = __kmem_cache_alloc_bulk(s, gfp, to_fill, + &sheaf->objects[sheaf->size]); + + if (!filled) + return -ENOMEM; + + sheaf->size = s->sheaf_capacity; + + stat_add(s, SHEAF_REFILL, filled); + + return 0; +} + + +static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp) +{ + struct slab_sheaf *sheaf = alloc_empty_sheaf(s, gfp); + + if (!sheaf) + return NULL; + + if (refill_sheaf(s, sheaf, gfp)) { + free_empty_sheaf(s, sheaf); + return NULL; + } + + return sheaf; +} + +/* + * Maximum number of objects freed during a single flush of main pcs sheaf. + * Translates directly to an on-stack array size. 
+ */ +#define PCS_BATCH_MAX 32U + +static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p); + +static void sheaf_flush_main(struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + unsigned int batch, remaining; + void *objects[PCS_BATCH_MAX]; + struct slab_sheaf *sheaf; + unsigned long flags; + +next_batch: + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + sheaf = pcs->main; + + batch = min(PCS_BATCH_MAX, sheaf->size); + + sheaf->size -= batch; + memcpy(objects, sheaf->objects + sheaf->size, batch * sizeof(void *)); + + remaining = sheaf->size; + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + __kmem_cache_free_bulk(s, batch, &objects[0]); + + stat_add(s, SHEAF_FLUSH_MAIN, batch); + + if (remaining) + goto next_batch; +} + +static void sheaf_flush(struct kmem_cache *s, struct slab_sheaf *sheaf) +{ + if (!sheaf->size) + return; + + stat_add(s, SHEAF_FLUSH_OTHER, sheaf->size); + + __kmem_cache_free_bulk(s, sheaf->size, &sheaf->objects[0]); + + sheaf->size = 0; +} + +/* + * Caller needs to make sure migration is disabled in order to fully flush + * single cpu's sheaves + * + * flushing operations are rare so let's keep it simple and flush to slabs + * directly, skipping the barn + */ +static void pcs_flush_all(struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *spare, *rcu_free; + unsigned long flags; + + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + spare = pcs->spare; + pcs->spare = NULL; + + rcu_free = pcs->rcu_free; + pcs->rcu_free = NULL; + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + if (spare) { + sheaf_flush(s, spare); + free_empty_sheaf(s, spare); + } + + // TODO: handle rcu_free + BUG_ON(rcu_free); + + sheaf_flush_main(s); +} + +static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu) +{ + struct slub_percpu_sheaves *pcs; + + pcs = per_cpu_ptr(s->cpu_sheaves, cpu); + + if (pcs->spare) { + sheaf_flush(s, pcs->spare); + free_empty_sheaf(s, pcs->spare); + pcs->spare = NULL; + } + + // TODO: handle rcu_free + BUG_ON(pcs->rcu_free); + + sheaf_flush_main(s); +} + +static void pcs_destroy(struct kmem_cache *s) +{ + int cpu; + + for_each_possible_cpu(cpu) { + struct slub_percpu_sheaves *pcs; + + pcs = per_cpu_ptr(s->cpu_sheaves, cpu); + + /* can happen when unwinding failed create */ + if (!pcs->main) + continue; + + WARN_ON(pcs->spare); + WARN_ON(pcs->rcu_free); + + if (!WARN_ON(pcs->main->size)) { + free_empty_sheaf(s, pcs->main); + pcs->main = NULL; + } + } + + free_percpu(s->cpu_sheaves); + s->cpu_sheaves = NULL; +} + +static struct slab_sheaf *barn_get_empty_sheaf(struct node_barn *barn) +{ + struct slab_sheaf *empty = NULL; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_empty) { + empty = list_first_entry(&barn->sheaves_empty, + struct slab_sheaf, barn_list); + list_del(&empty->barn_list); + barn->nr_empty--; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return empty; +} + +static int barn_put_empty_sheaf(struct node_barn *barn, + struct slab_sheaf *sheaf, bool ignore_limit) +{ + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&barn->lock, flags); + + if (!ignore_limit && barn->nr_empty >= MAX_EMPTY_SHEAVES) { + ret = -E2BIG; + } else { + list_add(&sheaf->barn_list, &barn->sheaves_empty); + barn->nr_empty++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + return ret; +} + +static int barn_put_full_sheaf(struct node_barn *barn, struct 
slab_sheaf *sheaf, + bool ignore_limit) +{ + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&barn->lock, flags); + + if (!ignore_limit && barn->nr_full >= MAX_FULL_SHEAVES) { + ret = -E2BIG; + } else { + list_add(&sheaf->barn_list, &barn->sheaves_full); + barn->nr_full++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + return ret; +} + +/* + * If a full sheaf is available, return it and put the supplied empty one to + * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't + * change. + */ +static struct slab_sheaf * +barn_replace_empty_sheaf(struct node_barn *barn, struct slab_sheaf *empty) +{ + struct slab_sheaf *full = NULL; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_full) { + full = list_first_entry(&barn->sheaves_full, struct slab_sheaf, + barn_list); + list_del(&full->barn_list); + list_add(&empty->barn_list, &barn->sheaves_empty); + barn->nr_full--; + barn->nr_empty++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return full; +} +/* + * If a empty sheaf is available, return it and put the supplied full one to + * barn. But if there are too many full sheaves, reject this with -E2BIG. + */ +static struct slab_sheaf * +barn_replace_full_sheaf(struct node_barn *barn, struct slab_sheaf *full) +{ + struct slab_sheaf *empty; + unsigned long flags; + + spin_lock_irqsave(&barn->lock, flags); + + if (barn->nr_full >= MAX_FULL_SHEAVES) { + empty = ERR_PTR(-E2BIG); + } else if (!barn->nr_empty) { + empty = ERR_PTR(-ENOMEM); + } else { + empty = list_first_entry(&barn->sheaves_empty, struct slab_sheaf, + barn_list); + list_del(&empty->barn_list); + list_add(&full->barn_list, &barn->sheaves_full); + barn->nr_empty--; + barn->nr_full++; + } + + spin_unlock_irqrestore(&barn->lock, flags); + + return empty; +} + +static void barn_init(struct node_barn *barn) +{ + spin_lock_init(&barn->lock); + INIT_LIST_HEAD(&barn->sheaves_full); + INIT_LIST_HEAD(&barn->sheaves_empty); + barn->nr_full = 0; + barn->nr_empty = 0; +} + +static void barn_shrink(struct kmem_cache *s, struct node_barn *barn) +{ + struct list_head empty_list; + struct list_head full_list; + struct slab_sheaf *sheaf, *sheaf2; + unsigned long flags; + + INIT_LIST_HEAD(&empty_list); + INIT_LIST_HEAD(&full_list); + + spin_lock_irqsave(&barn->lock, flags); + + list_splice_init(&barn->sheaves_full, &full_list); + barn->nr_full = 0; + list_splice_init(&barn->sheaves_empty, &empty_list); + barn->nr_empty = 0; + + spin_unlock_irqrestore(&barn->lock, flags); + + list_for_each_entry_safe(sheaf, sheaf2, &full_list, barn_list) { + sheaf_flush(s, sheaf); + list_move(&sheaf->barn_list, &empty_list); + } + + list_for_each_entry_safe(sheaf, sheaf2, &empty_list, barn_list) + free_empty_sheaf(s, sheaf); +} + /* * Slab allocation and freeing */ @@ -3271,11 +3659,42 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) put_partials_cpu(s, c); } -struct slub_flush_work { - struct work_struct work; - struct kmem_cache *s; - bool skip; -}; +static inline void flush_this_cpu_slab(struct kmem_cache *s) +{ + struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab); + + if (c->slab) + flush_slab(s, c); + + put_partials(s); +} + +static bool has_cpu_slab(int cpu, struct kmem_cache *s) +{ + struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); + + return c->slab || slub_percpu_partial(c); +} + +#else /* CONFIG_SLUB_TINY */ +static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { } +static inline bool has_cpu_slab(int cpu, struct kmem_cache *s) { return 
false; } +static inline void flush_this_cpu_slab(struct kmem_cache *s) { } +#endif /* CONFIG_SLUB_TINY */ + +static bool has_pcs_used(int cpu, struct kmem_cache *s) +{ + struct slub_percpu_sheaves *pcs; + + if (!s->cpu_sheaves) + return false; + + pcs = per_cpu_ptr(s->cpu_sheaves, cpu); + + return (pcs->spare || pcs->rcu_free || pcs->main->size); +} + +static void pcs_flush_all(struct kmem_cache *s); /* * Flush cpu slab. @@ -3285,30 +3704,18 @@ struct slub_flush_work { static void flush_cpu_slab(struct work_struct *w) { struct kmem_cache *s; - struct kmem_cache_cpu *c; struct slub_flush_work *sfw; sfw = container_of(w, struct slub_flush_work, work); s = sfw->s; - c = this_cpu_ptr(s->cpu_slab); - if (c->slab) - flush_slab(s, c); - - put_partials(s); -} - -static bool has_cpu_slab(int cpu, struct kmem_cache *s) -{ - struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); + if (s->cpu_sheaves) + pcs_flush_all(s); - return c->slab || slub_percpu_partial(c); + flush_this_cpu_slab(s); } -static DEFINE_MUTEX(flush_lock); -static DEFINE_PER_CPU(struct slub_flush_work, slub_flush); - static void flush_all_cpus_locked(struct kmem_cache *s) { struct slub_flush_work *sfw; @@ -3319,7 +3726,7 @@ static void flush_all_cpus_locked(struct kmem_cache *s) for_each_online_cpu(cpu) { sfw = &per_cpu(slub_flush, cpu); - if (!has_cpu_slab(cpu, s)) { + if (!has_cpu_slab(cpu, s) && !has_pcs_used(cpu, s)) { sfw->skip = true; continue; } @@ -3355,19 +3762,14 @@ static int slub_cpu_dead(unsigned int cpu) struct kmem_cache *s; mutex_lock(&slab_mutex); - list_for_each_entry(s, &slab_caches, list) + list_for_each_entry(s, &slab_caches, list) { __flush_cpu_slab(s, cpu); + __pcs_flush_all_cpu(s, cpu); + } mutex_unlock(&slab_mutex); return 0; } -#else /* CONFIG_SLUB_TINY */ -static inline void flush_all_cpus_locked(struct kmem_cache *s) { } -static inline void flush_all(struct kmem_cache *s) { } -static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu) { } -static inline int slub_cpu_dead(unsigned int cpu) { return 0; } -#endif /* CONFIG_SLUB_TINY */ - /* * Check if the objects in a per cpu structure fit numa * locality expectations. 
@@ -4095,6 +4497,173 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru, return memcg_slab_post_alloc_hook(s, lru, flags, size, p); } +static __fastpath_inline +void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) +{ + struct slub_percpu_sheaves *pcs; + unsigned long flags; + void *object; + + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size == 0)) { + + struct slab_sheaf *empty = NULL; + struct slab_sheaf *full; + bool can_alloc; + + if (pcs->spare && pcs->spare->size > 0) { + stat(s, SHEAF_SWAP); + swap(pcs->main, pcs->spare); + goto do_alloc; + } + + full = barn_replace_empty_sheaf(pcs->barn, pcs->main); + + if (full) { + pcs->main = full; + goto do_alloc; + } + + can_alloc = gfpflags_allow_blocking(gfp); + + if (can_alloc) { + if (pcs->spare) { + empty = pcs->spare; + pcs->spare = NULL; + } else { + empty = barn_get_empty_sheaf(pcs->barn); + } + } + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + if (!can_alloc) + return NULL; + + if (empty) { + if (!refill_sheaf(s, empty, gfp)) { + full = empty; + } else { + /* + * we must be very low on memory so don't bother + * with the barn + */ + free_empty_sheaf(s, empty); + } + } else { + full = alloc_full_sheaf(s, gfp); + } + + if (!full) + return NULL; + + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + /* + * If we are returning empty sheaf, we either got it from the + * barn or had to allocate one. If we are returning a full + * sheaf, it's due to racing or being migrated to a different + * cpu. Breaching the barn's sheaf limits should be thus rare + * enough so just ignore them to simplify the recovery. + */ + + if (pcs->main->size == 0) { + barn_put_empty_sheaf(pcs->barn, pcs->main, true); + pcs->main = full; + goto do_alloc; + } + + if (!pcs->spare) { + pcs->spare = full; + goto do_alloc; + } + + if (pcs->spare->size == 0) { + barn_put_empty_sheaf(pcs->barn, pcs->spare, true); + pcs->spare = full; + goto do_alloc; + } + + barn_put_full_sheaf(pcs->barn, full, true); + } + +do_alloc: + object = pcs->main->objects[--pcs->main->size]; + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + stat(s, ALLOC_PCS); + + return object; +} + +static __fastpath_inline +unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *main; + unsigned long flags; + unsigned int allocated = 0; + unsigned int batch; + +next_batch: + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size == 0)) { + + struct slab_sheaf *full; + + if (pcs->spare && pcs->spare->size > 0) { + stat(s, SHEAF_SWAP); + swap(pcs->main, pcs->spare); + goto do_alloc; + } + + full = barn_replace_empty_sheaf(pcs->barn, pcs->main); + + if (full) { + pcs->main = full; + goto do_alloc; + } + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + /* + * Once full sheaves in barn are depleted, let the bulk + * allocation continue from slab pages, otherwise we would just + * be copying arrays of pointers twice. 
+ */ + return allocated; + } + +do_alloc: + + main = pcs->main; + batch = min(size, main->size); + + main->size -= batch; + memcpy(p, main->objects + main->size, batch * sizeof(void *)); + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + stat_add(s, ALLOC_PCS, batch); + + allocated += batch; + + if (batch < size) { + p += batch; + size -= batch; + goto next_batch; + } + + return allocated; +} + + /* * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc) * have the fastpath folded into their functions. So no function call @@ -4119,7 +4688,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list if (unlikely(object)) goto out; - object = __slab_alloc_node(s, gfpflags, node, addr, orig_size); + if (s->cpu_sheaves && (node == NUMA_NO_NODE)) + object = alloc_from_pcs(s, gfpflags); + + if (!object) + object = __slab_alloc_node(s, gfpflags, node, addr, orig_size); maybe_wipe_obj_freeptr(s, object); init = slab_want_init_on_alloc(gfpflags, s); @@ -4490,6 +5063,196 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, discard_slab(s, slab); } +/* + * Free an object to the percpu sheaves. + * The object is expected to have passed slab_free_hook() already. + */ +static __fastpath_inline +void free_to_pcs(struct kmem_cache *s, void *object) +{ + struct slub_percpu_sheaves *pcs; + unsigned long flags; + +restart: + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size == s->sheaf_capacity)) { + + struct slab_sheaf *empty; + + if (!pcs->spare) { + empty = barn_get_empty_sheaf(pcs->barn); + if (empty) { + pcs->spare = pcs->main; + pcs->main = empty; + goto do_free; + } + goto alloc_empty; + } + + if (pcs->spare->size < s->sheaf_capacity) { + stat(s, SHEAF_SWAP); + swap(pcs->main, pcs->spare); + goto do_free; + } + + empty = barn_replace_full_sheaf(pcs->barn, pcs->main); + + if (!IS_ERR(empty)) { + pcs->main = empty; + goto do_free; + } + + if (PTR_ERR(empty) == -E2BIG) { + /* Since we got here, spare exists and is full */ + struct slab_sheaf *to_flush = pcs->spare; + + pcs->spare = NULL; + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + sheaf_flush(s, to_flush); + empty = to_flush; + goto got_empty; + } + +alloc_empty: + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + empty = alloc_empty_sheaf(s, GFP_NOWAIT); + + if (!empty) { + sheaf_flush_main(s); + goto restart; + } + +got_empty: + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + /* + * if we put any sheaf to barn here, it's because we raced or + * have been migrated to a different cpu, which should be rare + * enough so just ignore the barn's limits to simplify + */ + if (unlikely(pcs->main->size < s->sheaf_capacity)) { + if (!pcs->spare) + pcs->spare = empty; + else + barn_put_empty_sheaf(pcs->barn, empty, true); + goto do_free; + } + + if (!pcs->spare) { + pcs->spare = pcs->main; + pcs->main = empty; + goto do_free; + } + + barn_put_full_sheaf(pcs->barn, pcs->main, true); + pcs->main = empty; + } + +do_free: + pcs->main->objects[pcs->main->size++] = object; + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + stat(s, FREE_PCS); +} + +/* + * Bulk free objects to the percpu sheaves. + * Unlike free_to_pcs() this includes the calls to all necessary hooks + * and the fallback to freeing to slab pages. 
+ */ +static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *main; + unsigned long flags; + unsigned int batch, i = 0; + bool init; + + init = slab_want_init_on_free(s); + + while (i < size) { + struct slab *slab = virt_to_slab(p[i]); + + memcg_slab_free_hook(s, slab, p + i, 1); + alloc_tagging_slab_free_hook(s, slab, p + i, 1); + + if (unlikely(!slab_free_hook(s, p[i], init, false))) { + p[i] = p[--size]; + if (!size) + return; + continue; + } + + i++; + } + +next_batch: + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->main->size == s->sheaf_capacity)) { + + struct slab_sheaf *empty; + + if (!pcs->spare) { + empty = barn_get_empty_sheaf(pcs->barn); + if (empty) { + pcs->spare = pcs->main; + pcs->main = empty; + goto do_free; + } + goto no_empty; + } + + if (pcs->spare->size < s->sheaf_capacity) { + stat(s, SHEAF_SWAP); + swap(pcs->main, pcs->spare); + goto do_free; + } + + empty = barn_replace_full_sheaf(pcs->barn, pcs->main); + + if (!IS_ERR(empty)) { + pcs->main = empty; + goto do_free; + } + +no_empty: + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + /* + * if we depleted all empty sheaves in the barn or there are too + * many full sheaves, free the rest to slab pages + */ + + __kmem_cache_free_bulk(s, size, p); + return; + } + +do_free: + main = pcs->main; + batch = min(size, s->sheaf_capacity - main->size); + + memcpy(main->objects + main->size, p, batch * sizeof(void *)); + main->size += batch; + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + stat_add(s, FREE_PCS, batch); + + if (batch < size) { + p += batch; + size -= batch; + goto next_batch; + } +} + #ifndef CONFIG_SLUB_TINY /* * Fastpath with forced inlining to produce a kfree and kmem_cache_free that @@ -4576,7 +5339,12 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object, memcg_slab_free_hook(s, slab, &object, 1); alloc_tagging_slab_free_hook(s, slab, &object, 1); - if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false))) + if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false))) + return; + + if (s->cpu_sheaves) + free_to_pcs(s, object); + else do_slab_free(s, slab, object, object, 1, addr); } @@ -4837,6 +5605,15 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) if (!size) return; + /* + * freeing to sheaves is so incompatible with the detached freelist so + * once we go that way, we have to do everything differently + */ + if (s && s->cpu_sheaves) { + free_to_pcs_bulk(s, size, p); + return; + } + do { struct detached_freelist df; @@ -4955,7 +5732,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { - int i; + unsigned int i = 0; if (!size) return 0; @@ -4964,9 +5741,21 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, if (unlikely(!s)) return 0; - i = __kmem_cache_alloc_bulk(s, flags, size, p); - if (unlikely(i == 0)) - return 0; + if (s->cpu_sheaves) + i = alloc_from_pcs_bulk(s, size, p); + + if (i < size) { + unsigned int j = __kmem_cache_alloc_bulk(s, flags, size - i, p + i); + /* + * If we ran out of memory, don't bother with freeing back to + * the percpu sheaves, we have bigger problems. 
+ */ + if (unlikely(j == 0)) { + if (i > 0) + __kmem_cache_free_bulk(s, i, p); + return 0; + } + } /* * memcg and kmem_cache debug support and memory initialization. @@ -4976,11 +5765,11 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size, slab_want_init_on_alloc(flags, s), s->object_size))) { return 0; } - return i; + + return size; } EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof); - /* * Object placement in a slab is made very easy because we always start at * offset 0. If we tune the size of the object to the alignment then we can @@ -5113,8 +5902,8 @@ static inline int calculate_order(unsigned int size) return -ENOSYS; } -static void -init_kmem_cache_node(struct kmem_cache_node *n) +static bool +init_kmem_cache_node(struct kmem_cache_node *n, struct node_barn *barn) { n->nr_partial = 0; spin_lock_init(&n->list_lock); @@ -5124,6 +5913,11 @@ init_kmem_cache_node(struct kmem_cache_node *n) atomic_long_set(&n->total_objects, 0); INIT_LIST_HEAD(&n->full); #endif + n->barn = barn; + if (barn) + barn_init(barn); + + return true; } #ifndef CONFIG_SLUB_TINY @@ -5154,6 +5948,30 @@ static inline int alloc_kmem_cache_cpus(struct kmem_cache *s) } #endif /* CONFIG_SLUB_TINY */ +static int init_percpu_sheaves(struct kmem_cache *s) +{ + int cpu; + + for_each_possible_cpu(cpu) { + struct slub_percpu_sheaves *pcs; + int nid; + + pcs = per_cpu_ptr(s->cpu_sheaves, cpu); + + local_lock_init(&pcs->lock); + + nid = cpu_to_mem(cpu); + + pcs->barn = get_node(s, nid)->barn; + pcs->main = alloc_empty_sheaf(s, GFP_KERNEL); + + if (!pcs->main) + return -ENOMEM; + } + + return 0; +} + static struct kmem_cache *kmem_cache_node; /* @@ -5189,7 +6007,7 @@ static void early_kmem_cache_node_alloc(int node) slab->freelist = get_freepointer(kmem_cache_node, n); slab->inuse = 1; kmem_cache_node->node[node] = n; - init_kmem_cache_node(n); + init_kmem_cache_node(n, NULL); inc_slabs_node(kmem_cache_node, node, slab->objects); /* @@ -5205,6 +6023,13 @@ static void free_kmem_cache_nodes(struct kmem_cache *s) struct kmem_cache_node *n; for_each_kmem_cache_node(s, node, n) { + if (n->barn) { + WARN_ON(n->barn->nr_full); + WARN_ON(n->barn->nr_empty); + kfree(n->barn); + n->barn = NULL; + } + s->node[node] = NULL; kmem_cache_free(kmem_cache_node, n); } @@ -5213,6 +6038,8 @@ static void free_kmem_cache_nodes(struct kmem_cache *s) void __kmem_cache_release(struct kmem_cache *s) { cache_random_seq_destroy(s); + if (s->cpu_sheaves) + pcs_destroy(s); #ifndef CONFIG_SLUB_TINY free_percpu(s->cpu_slab); #endif @@ -5225,20 +6052,27 @@ static int init_kmem_cache_nodes(struct kmem_cache *s) for_each_node_mask(node, slab_nodes) { struct kmem_cache_node *n; + struct node_barn *barn = NULL; if (slab_state == DOWN) { early_kmem_cache_node_alloc(node); continue; } + + if (s->cpu_sheaves) { + barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, node); + + if (!barn) + return 0; + } + n = kmem_cache_alloc_node(kmem_cache_node, GFP_KERNEL, node); - - if (!n) { - free_kmem_cache_nodes(s); + if (!n) return 0; - } - init_kmem_cache_node(n); + init_kmem_cache_node(n, barn); + s->node[node] = n; } return 1; @@ -5494,6 +6328,8 @@ int __kmem_cache_shutdown(struct kmem_cache *s) flush_all_cpus_locked(s); /* Attempt to free all objects */ for_each_kmem_cache_node(s, node, n) { + if (n->barn) + barn_shrink(s, n->barn); free_partial(s, n); if (n->nr_partial || node_nr_slabs(n)) return 1; @@ -5680,6 +6516,9 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s) for (i = 0; i < SHRINK_PROMOTE_MAX; i++) INIT_LIST_HEAD(promote + i); + if 
(n->barn) + barn_shrink(s, n->barn); + spin_lock_irqsave(&n->list_lock, flags); /* @@ -5792,12 +6631,24 @@ static int slab_mem_going_online_callback(void *arg) */ mutex_lock(&slab_mutex); list_for_each_entry(s, &slab_caches, list) { + struct node_barn *barn = NULL; + /* * The structure may already exist if the node was previously * onlined and offlined. */ if (get_node(s, nid)) continue; + + if (s->cpu_sheaves) { + barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, nid); + + if (!barn) { + ret = -ENOMEM; + goto out; + } + } + /* * XXX: kmem_cache_alloc_node will fallback to other nodes * since memory is not yet available from the node that @@ -5808,7 +6659,9 @@ static int slab_mem_going_online_callback(void *arg) ret = -ENOMEM; goto out; } - init_kmem_cache_node(n); + + init_kmem_cache_node(n, barn); + s->node[nid] = n; } /* @@ -6026,6 +6879,16 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name, set_cpu_partial(s); + if (args->sheaf_capacity) { + s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves); + if (!s->cpu_sheaves) { + err = -ENOMEM; + goto out; + } + // TODO: increase capacity to grow slab_sheaf up to next kmalloc size? + s->sheaf_capacity = args->sheaf_capacity; + } + #ifdef CONFIG_NUMA s->remote_node_defrag_ratio = 1000; #endif @@ -6042,6 +6905,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name, if (!alloc_kmem_cache_cpus(s)) goto out; + if (s->cpu_sheaves) { + err = init_percpu_sheaves(s); + if (err) + goto out; + } + /* Mutex is not taken during early boot */ if (slab_state <= UP) { err = 0; @@ -6060,7 +6929,6 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name, __kmem_cache_release(s); return err; } - #ifdef SLAB_SUPPORTS_SYSFS static int count_inuse(struct slab *slab) { @@ -6838,8 +7706,10 @@ static ssize_t text##_store(struct kmem_cache *s, \ } \ SLAB_ATTR(text); \ +STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf); STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath); STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath); +STAT_ATTR(FREE_PCS, free_cpu_sheaf); STAT_ATTR(FREE_FASTPATH, free_fastpath); STAT_ATTR(FREE_SLOWPATH, free_slowpath); STAT_ATTR(FREE_FROZEN, free_frozen); @@ -6864,6 +7734,12 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc); STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free); STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node); STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain); +STAT_ATTR(SHEAF_FLUSH_MAIN, sheaf_flush_main); +STAT_ATTR(SHEAF_FLUSH_OTHER, sheaf_flush_other); +STAT_ATTR(SHEAF_REFILL, sheaf_refill); +STAT_ATTR(SHEAF_SWAP, sheaf_swap); +STAT_ATTR(SHEAF_ALLOC, sheaf_alloc); +STAT_ATTR(SHEAF_FREE, sheaf_free); #endif /* CONFIG_SLUB_STATS */ #ifdef CONFIG_KFENCE @@ -6925,8 +7801,10 @@ static struct attribute *slab_attrs[] = { &remote_node_defrag_ratio_attr.attr, #endif #ifdef CONFIG_SLUB_STATS + &alloc_cpu_sheaf_attr.attr, &alloc_fastpath_attr.attr, &alloc_slowpath_attr.attr, + &free_cpu_sheaf_attr.attr, &free_fastpath_attr.attr, &free_slowpath_attr.attr, &free_frozen_attr.attr, @@ -6951,6 +7829,12 @@ static struct attribute *slab_attrs[] = { &cpu_partial_free_attr.attr, &cpu_partial_node_attr.attr, &cpu_partial_drain_attr.attr, + &sheaf_flush_main_attr.attr, + &sheaf_flush_other_attr.attr, + &sheaf_refill_attr.attr, + &sheaf_swap_attr.attr, + &sheaf_alloc_attr.attr, + &sheaf_free_attr.attr, #endif #ifdef CONFIG_FAILSLAB &failslab_attr.attr, From patchwork Tue Nov 12 16:38:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 13872508 
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:46 +0100
Subject: [PATCH RFC 2/6] mm/slub: add sheaf support for
  batching kfree_rcu() operations
MIME-Version: 1.0
Message-Id: <20241112-slub-percpu-caches-v1-2-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E. McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Vlastimil Babka

Extend the sheaf infrastructure for more efficient kfree_rcu() handling.
For caches where sheaves are initialized, maintain on each cpu an
rcu_free sheaf in addition to the main and spare sheaves. kfree_rcu()
operations will try to put objects on this sheaf.
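As a usage sketch (not part of the patch): the object type, destructor and
cache below are hypothetical; only the sheaf_capacity and sheaf_rcu_dtor
fields of struct kmem_cache_args come from this series (the destructor is
described further below in this message). Objects freed with kfree_rcu()
are then batched on the percpu rcu_free sheaf:

#include <linux/init.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_node {
	struct rcu_head rcu;	/* used by the kfree_rcu() fallback path */
	void *payload;
};

static void my_node_rcu_dtor(void *obj)
{
	struct my_node *node = obj;

	/* cleanup that must run after the grace period, before reuse */
	kfree(node->payload);
}

static struct kmem_cache *my_node_cache;

static int __init my_node_cache_init(void)
{
	struct kmem_cache_args args = {
		.sheaf_capacity = 32,
		.sheaf_rcu_dtor = my_node_rcu_dtor,
	};

	my_node_cache = kmem_cache_create("my_node", sizeof(struct my_node),
					  &args, 0);
	return my_node_cache ? 0 : -ENOMEM;
}

static void my_node_free(struct my_node *node)
{
	/*
	 * Instead of a custom call_rcu() callback, the object is queued on
	 * the percpu rcu_free sheaf; after a grace period the whole sheaf is
	 * reused via the barn or bulk-freed, and the destructor runs on each
	 * object.
	 */
	kfree_rcu(node, rcu);
}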
Once full, the sheaf is detached and submitted to call_rcu() with a
handler that will try to put it in the barn, or flush it to slab pages
using bulk free when the barn is full. Then a new empty sheaf must be
obtained to put more objects there.

It's possible that no free sheaves are available to use for a new
rcu_free sheaf, and the allocation in kfree_rcu() context can only use
GFP_NOWAIT and thus may fail. In that case, fall back to the existing
kfree_rcu() machinery.

Because some intended users will need to perform additional cleanups
after the grace period and thus have custom call_rcu() callbacks today,
add the possibility to specify a kfree_rcu() specific destructor. Because
of the fallback possibility, the destructor now needs to be invoked also
from within RCU, so add __kvfree_rcu() that RCU can use instead of
kvfree().

Expected advantages:
- batching the kfree_rcu() operations, which could eventually replace the
  batching done in RCU itself
- sheaves can be reused via the barn instead of being flushed to slabs,
  which is more effective
- this includes cases where only some cpus are allowed to process rcu
  callbacks (Android)

Possible disadvantage:
- objects might be waiting for more than their grace period (it is
  determined by the last object freed into the sheaf), increasing memory
  usage - but that might be true for the batching done by RCU as well?

RFC LIMITATIONS:
- only tree rcu is converted, not tiny
- the rcu fallback might resort to kfree_bulk(), not kvfree(). Instead of
  adding a variant of kfree_bulk() with destructors, is there an easy way
  to disable the kfree_bulk() path in the fallback case?

Signed-off-by: Vlastimil Babka
---
 include/linux/slab.h |  15 +++++
 kernel/rcu/tree.c    |   8 ++-
 mm/slab.h            |  25 +++++++
 mm/slab_common.c     |   3 +
 mm/slub.c            | 182 +++++++++++++++++++++++++++++++++++++++++++++++++--
 5 files changed, 227 insertions(+), 6 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h index b13fb1c1f03c14a5b45bc6a64a2096883aef9f83..23904321992ad2eeb9389d0883cf4d5d5d71d896 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -343,6 +343,21 @@ struct kmem_cache_args { * %0 means no sheaves will be created */ unsigned int sheaf_capacity; + /** + * @sheaf_rcu_dtor: A destructor for objects freed by kfree_rcu() + * + * Only valid when non-zero @sheaf_capacity is specified. When freeing + * objects by kfree_rcu() in a cache with sheaves, the objects are put + * to a special percpu sheaf. When that sheaf is full, it's passed to + * call_rcu() and after a grace period the sheaf can be reused for new + * allocations. In case a cleanup is necessary after the grace period + * and before reusal, a pointer to such function can be given as + * @sheaf_rcu_dtor and will be called on each object in the rcu sheaf + * after the grace period passes and before the sheaf's reuse. + * + * %NULL means no destructor is called.
+ */ + void (*sheaf_rcu_dtor)(void *obj); }; struct kmem_cache *__kmem_cache_create_args(const char *name, diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index b1f883fcd9185a5e22c10102d1024c40688f57fb..42c994fdf9960bfed8d8bd697de90af72c1f4f58 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -65,6 +65,7 @@ #include #include #include "../time/tick-internal.h" +#include "../../mm/slab.h" #include "tree.h" #include "rcu.h" @@ -3420,7 +3421,7 @@ kvfree_rcu_list(struct rcu_head *head) trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset); if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset))) - kvfree(ptr); + __kvfree_rcu(ptr); rcu_lock_release(&rcu_callback_map); cond_resched_tasks_rcu_qs(); @@ -3797,6 +3798,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr) if (!head) might_sleep(); + if (kfree_rcu_sheaf(ptr)) + return; + // Queue the object but don't yet schedule the batch. if (debug_rcu_head_queue(ptr)) { // Probable double kfree_rcu(), just leak. @@ -3849,7 +3853,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr) if (!success) { debug_rcu_head_unqueue((struct rcu_head *) ptr); synchronize_rcu(); - kvfree(ptr); + __kvfree_rcu(ptr); } } EXPORT_SYMBOL_GPL(kvfree_call_rcu); diff --git a/mm/slab.h b/mm/slab.h index 001e0d55467bb4803b5dff718ba8e0c775f42b3f..4dc145c14dfd97677c74a767c22f5dd22f5d6451 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -276,6 +276,9 @@ struct kmem_cache { gfp_t allocflags; /* gfp flags to use on each alloc */ int refcount; /* Refcount for slab cache destroy */ void (*ctor)(void *object); /* Object constructor */ + void (*rcu_dtor)(void *object); /* Object destructor to execute after + * kfree_rcu grace period + */ unsigned int inuse; /* Offset to metadata */ unsigned int align; /* Alignment */ unsigned int red_left_pad; /* Left redzone padding size */ @@ -454,6 +457,28 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s) return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT)); } +void __kvfree_rcu(void *obj); + +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj); + +static inline bool kfree_rcu_sheaf(void *obj) +{ + struct kmem_cache *s; + struct folio *folio; + struct slab *slab; + + folio = virt_to_folio(obj); + if (unlikely(!folio_test_slab(folio))) + return false; + + slab = folio_slab(folio); + s = slab->slab_cache; + if (s->cpu_sheaves) + return __kfree_rcu_sheaf(s, obj); + + return false; +} + /* Legal flag mask for kmem_cache_create(), for various configurations */ #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \ SLAB_CACHE_DMA32 | SLAB_PANIC | \ diff --git a/mm/slab_common.c b/mm/slab_common.c index 7939f3f017740e0ac49ffa971c45409d0fbe2f23..d69ed1e7ea34f9657cb9514fb98a48647f01381b 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -236,6 +236,9 @@ static struct kmem_cache *create_cache(const char *name, !IS_ALIGNED(args->freeptr_offset, sizeof(freeptr_t)))) goto out; + if (args->sheaf_rcu_dtor && !args->sheaf_capacity) + goto out; + err = -ENOMEM; s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL); if (!s) diff --git a/mm/slub.c b/mm/slub.c index 7da08112213b203993b5159eb35a1ea640d706fe..6811d766c0470cd7066c2574ad86e00405c916bb 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -351,6 +351,8 @@ enum stat_item { ALLOC_FASTPATH, /* Allocation from cpu slab */ ALLOC_SLOWPATH, /* Allocation by getting a new cpu slab */ FREE_PCS, /* Free to percpu sheaf */ + FREE_RCU_SHEAF, /* Free to rcu_free sheaf */ + FREE_RCU_SHEAF_FAIL, /* Failed to free to a rcu_free sheaf */ FREE_FASTPATH, /* Free to cpu slab 
*/ FREE_SLOWPATH, /* Freeing not to cpu slab */ FREE_FROZEN, /* Freeing to frozen slab */ @@ -2557,6 +2559,24 @@ static void sheaf_flush(struct kmem_cache *s, struct slab_sheaf *sheaf) sheaf->size = 0; } +static void __rcu_free_sheaf_prepare(struct kmem_cache *s, + struct slab_sheaf *sheaf); + +static void rcu_free_sheaf_nobarn(struct rcu_head *head) +{ + struct slab_sheaf *sheaf; + struct kmem_cache *s; + + sheaf = container_of(head, struct slab_sheaf, rcu_head); + s = sheaf->cache; + + __rcu_free_sheaf_prepare(s, sheaf); + + sheaf_flush(s, sheaf); + + free_empty_sheaf(s, sheaf); +} + /* * Caller needs to make sure migration is disabled in order to fully flush * single cpu's sheaves @@ -2586,8 +2606,8 @@ static void pcs_flush_all(struct kmem_cache *s) free_empty_sheaf(s, spare); } - // TODO: handle rcu_free - BUG_ON(rcu_free); + if (rcu_free) + call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn); sheaf_flush_main(s); } @@ -2604,8 +2624,10 @@ static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu) pcs->spare = NULL; } - // TODO: handle rcu_free - BUG_ON(pcs->rcu_free); + if (pcs->rcu_free) { + call_rcu(&pcs->rcu_free->rcu_head, rcu_free_sheaf_nobarn); + pcs->rcu_free = NULL; + } sheaf_flush_main(s); } @@ -5161,6 +5183,121 @@ void free_to_pcs(struct kmem_cache *s, void *object) stat(s, FREE_PCS); } +static void __rcu_free_sheaf_prepare(struct kmem_cache *s, + struct slab_sheaf *sheaf) +{ + bool init = slab_want_init_on_free(s); + void **p = &sheaf->objects[0]; + unsigned int i = 0; + + while (i < sheaf->size) { + struct slab *slab = virt_to_slab(p[i]); + + if (s->rcu_dtor) + s->rcu_dtor(p[i]); + + memcg_slab_free_hook(s, slab, p + i, 1); + alloc_tagging_slab_free_hook(s, slab, p + i, 1); + + if (unlikely(!slab_free_hook(s, p[i], init, false))) { + p[i] = p[--sheaf->size]; + continue; + } + + i++; + } +} + +static void rcu_free_sheaf(struct rcu_head *head) +{ + struct slab_sheaf *sheaf; + struct node_barn *barn; + struct kmem_cache *s; + + sheaf = container_of(head, struct slab_sheaf, rcu_head); + + s = sheaf->cache; + + __rcu_free_sheaf_prepare(s, sheaf); + + barn = get_node(s, numa_mem_id())->barn; + + /* due to slab_free_hook() */ + if (unlikely(sheaf->size == 0)) + goto empty; + + if (!barn_put_full_sheaf(barn, sheaf, false)) + return; + + sheaf_flush(s, sheaf); + +empty: + if (!barn_put_empty_sheaf(barn, sheaf, false)) + return; + + free_empty_sheaf(s, sheaf); +} + +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj) +{ + struct slub_percpu_sheaves *pcs; + struct slab_sheaf *rcu_sheaf; + unsigned long flags; + + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(!pcs->rcu_free)) { + + struct slab_sheaf *empty; + + empty = barn_get_empty_sheaf(pcs->barn); + + if (empty) { + pcs->rcu_free = empty; + goto do_free; + } + + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + + empty = alloc_empty_sheaf(s, GFP_NOWAIT); + + if (!empty) { + stat(s, FREE_RCU_SHEAF_FAIL); + return false; + } + + local_lock_irqsave(&s->cpu_sheaves->lock, flags); + pcs = this_cpu_ptr(s->cpu_sheaves); + + if (unlikely(pcs->rcu_free)) + barn_put_empty_sheaf(pcs->barn, empty, true); + else + pcs->rcu_free = empty; + } + +do_free: + + rcu_sheaf = pcs->rcu_free; + + rcu_sheaf->objects[rcu_sheaf->size++] = obj; + + if (likely(rcu_sheaf->size < s->sheaf_capacity)) { + local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + stat(s, FREE_RCU_SHEAF); + return true; + } + + pcs->rcu_free = NULL; + local_unlock_irqrestore(&s->cpu_sheaves->lock, 
flags); + + call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf); + + stat(s, FREE_RCU_SHEAF); + + return true; +} + /* * Bulk free objects to the percpu sheaves. * Unlike free_to_pcs() this includes the calls to all necessary hooks @@ -5466,6 +5603,32 @@ static void free_large_kmalloc(struct folio *folio, void *object) folio_put(folio); } +void __kvfree_rcu(void *obj) +{ + struct folio *folio; + struct slab *slab; + struct kmem_cache *s; + + if (is_vmalloc_addr(obj)) { + vfree(obj); + return; + } + + folio = virt_to_folio(obj); + if (unlikely(!folio_test_slab(folio))) { + free_large_kmalloc(folio, obj); + return; + } + + slab = folio_slab(folio); + s = slab->slab_cache; + + if (s->rcu_dtor) + s->rcu_dtor(obj); + + slab_free(s, slab, obj, _RET_IP_); +} + /** * kfree - free previously allocated memory * @object: pointer returned by kmalloc() or kmem_cache_alloc() @@ -6326,6 +6489,11 @@ int __kmem_cache_shutdown(struct kmem_cache *s) struct kmem_cache_node *n; flush_all_cpus_locked(s); + + /* we might have rcu sheaves in flight */ + if (s->cpu_sheaves) + rcu_barrier(); + /* Attempt to free all objects */ for_each_kmem_cache_node(s, node, n) { if (n->barn) @@ -6887,6 +7055,8 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name, } // TODO: increase capacity to grow slab_sheaf up to next kmalloc size? s->sheaf_capacity = args->sheaf_capacity; + + s->rcu_dtor = args->sheaf_rcu_dtor; } #ifdef CONFIG_NUMA @@ -7710,6 +7880,8 @@ STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf); STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath); STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath); STAT_ATTR(FREE_PCS, free_cpu_sheaf); +STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf); +STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail); STAT_ATTR(FREE_FASTPATH, free_fastpath); STAT_ATTR(FREE_SLOWPATH, free_slowpath); STAT_ATTR(FREE_FROZEN, free_frozen); @@ -7805,6 +7977,8 @@ static struct attribute *slab_attrs[] = { &alloc_fastpath_attr.attr, &alloc_slowpath_attr.attr, &free_cpu_sheaf_attr.attr, + &free_rcu_sheaf_attr.attr, + &free_rcu_sheaf_fail_attr.attr, &free_fastpath_attr.attr, &free_slowpath_attr.attr, &free_frozen_attr.attr, From patchwork Tue Nov 12 16:38:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 13872509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59E0CD42BAB for ; Tue, 12 Nov 2024 16:40:02 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E36656B0103; Tue, 12 Nov 2024 11:39:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AECAC6B0096; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 703466B00FD; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id A774E6B00FB for ; Tue, 12 Nov 2024 11:39:51 -0500 (EST) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 5600D1603F9 for ; Tue, 12 Nov 2024 16:39:51 +0000 (UTC) X-FDA: 82778003616.05.7809299 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) by imf03.hostedemail.com (Postfix) with ESMTP id AD7D220004 for ; Tue, 12 Nov 2024 16:39:29 
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:47 +0100
Subject: [PATCH RFC 3/6] maple_tree: use percpu sheaves for maple_node_cache
Message-Id: <20241112-slub-percpu-caches-v1-3-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E. McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Vlastimil Babka

Set up the maple_node_cache with percpu sheaves of size 32 to hopefully improve its performance.

Change the single-node rcu freeing in ma_free_rcu() to use kfree_rcu() instead of the custom callback, which allows the rcu_free sheaf batching to be used.

Note there are other users of mt_free_rcu() where larger parts of the maple tree are submitted to call_rcu() as a whole; those cannot use the rcu_free sheaf, but it is still possible for maple nodes freed this way to be reused via the barn, even if only some cpus are allowed to process rcu callbacks.
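For illustration only (this is not part of the diff that follows): a hypothetical cache set up the same way as maple_node_cache here gets the rcu_free sheaf batching simply by freeing its objects through kfree_rcu(). The my_node names below are placeholders, and the sketch assumes the sheaf support added earlier in this series.

  #include <linux/init.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  /* Placeholder object type with an rcu_head for kfree_rcu(). */
  struct my_node {
          struct rcu_head rcu;
          unsigned long payload[8];
  };

  static struct kmem_cache *my_node_cache;

  static void __init my_node_cache_init(void)
  {
          struct kmem_cache_args args = {
                  .align = __alignof__(struct my_node),
                  .sheaf_capacity = 32,   /* opt into percpu sheaves */
          };

          my_node_cache = kmem_cache_create("my_node", sizeof(struct my_node),
                                            &args, SLAB_PANIC);
  }

  static void my_node_free(struct my_node *node)
  {
          /*
           * For a cache with cpu_sheaves, kfree_rcu() lands in the cpu's
           * rcu_free sheaf and is flushed in batches after a grace period;
           * a custom call_rcu() callback freeing the object itself would
           * bypass that batching.
           */
          kfree_rcu(node, rcu);
  }
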
Signed-off-by: Vlastimil Babka --- lib/maple_tree.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 3619301dda2ebeaaba8a73837389b6ee3c7e1a3f..c69365e17fcbfe963dcedd0de07335fc6bbdfb27 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -194,7 +194,7 @@ static void mt_free_rcu(struct rcu_head *head) static void ma_free_rcu(struct maple_node *node) { WARN_ON(node->parent != ma_parent_ptr(node)); - call_rcu(&node->rcu, mt_free_rcu); + kfree_rcu(node, rcu); } static void mas_set_height(struct ma_state *mas) @@ -6299,9 +6299,14 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp) void __init maple_tree_init(void) { + struct kmem_cache_args args = { + .align = sizeof(struct maple_node), + .sheaf_capacity = 32, + }; + maple_node_cache = kmem_cache_create("maple_node", - sizeof(struct maple_node), sizeof(struct maple_node), - SLAB_PANIC, NULL); + sizeof(struct maple_node), &args, + SLAB_PANIC); } /** From patchwork Tue Nov 12 16:38:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 13872506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 530CDD42BAA for ; Tue, 12 Nov 2024 16:39:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4610E6B00FF; Tue, 12 Nov 2024 11:39:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 375576B0104; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 129CB6B0102; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 9C56F6B0096 for ; Tue, 12 Nov 2024 11:39:51 -0500 (EST) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 50087C0187 for ; Tue, 12 Nov 2024 16:39:51 +0000 (UTC) X-FDA: 82778003280.10.3222AB1 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) by imf29.hostedemail.com (Postfix) with ESMTP id 5964D12001F for ; Tue, 12 Nov 2024 16:38:52 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=kvkqbo5c; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=GSk0LM88; dkim=pass header.d=suse.cz header.s=susede2_rsa header.b=kvkqbo5c; dkim=pass header.d=suse.cz header.s=susede2_ed25519 header.b=GSk0LM88; spf=pass (imf29.hostedemail.com: domain of vbabka@suse.cz designates 195.135.223.130 as permitted sender) smtp.mailfrom=vbabka@suse.cz; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1731429501; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=SpHz+sHwXWPoDofFQG0LgmUZpXU7k9OQyeDihc++kFA=; b=OEd0dK2OQsL/h82gKowPupaz8KZT3g2NPRFdl29jugPTx/DXxzWX0pGpozn08DfPuK5p4C qdmWNfNoNPbbaUkTBjrTY7wErUx2qTNrTH+6/L1JV2f0aeCWO2IaS8++6jwqhwjpRzrGkT T6sGw3o27T9cW7Zdk7+gDGkswt+OuR4= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; 
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:48 +0100
Subject: [PATCH RFC 4/6] mm, vma: use sheaves for vm_area_struct cache
Message-Id: <20241112-slub-percpu-caches-v1-4-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E. McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Vlastimil Babka

Create the vm_area_struct cache with percpu sheaves of size 32 to hopefully improve its performance.

For CONFIG_PER_VMA_LOCK, change the vma freeing from custom call_rcu() callback to kfree_rcu() which will perform rcu_free sheaf batching.
Since there may be additional structures attached and they are freed only after the grace period, create a __vma_area_rcu_free_dtor() to do that. Note I have not investigated whether vma_numab_state_free() or free_anon_vma_name() must really need to wait for the grace period. For vma_lock_free() ideally we wouldn't free it at all when freeing the vma to the sheaf (or even slab page), but that would require using also a ctor for vmas to allocate the vma lock, and reintroducing dtor support for deallocating the lock when freeing slab pages containing the vmas. The plan is to move vma_lock into vma itself anyway, so if the rest can be freed immediately, the whole destructor support won't be needed anymore. Signed-off-by: Vlastimil Babka --- kernel/fork.c | 27 +++++++++++++++++++-------- 1 file changed, 19 insertions(+), 8 deletions(-) diff --git a/kernel/fork.c b/kernel/fork.c index 22f43721d031d48fd5be2606e86642334be9735f..9b1ae5aaf6a58fded6c9ac378809296825eba9fa 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -516,22 +516,24 @@ void __vm_area_free(struct vm_area_struct *vma) kmem_cache_free(vm_area_cachep, vma); } -#ifdef CONFIG_PER_VMA_LOCK -static void vm_area_free_rcu_cb(struct rcu_head *head) +static void __vma_area_rcu_free_dtor(void *ptr) { - struct vm_area_struct *vma = container_of(head, struct vm_area_struct, - vm_rcu); + struct vm_area_struct *vma = ptr; /* The vma should not be locked while being destroyed. */ +#ifdef CONFIG_PER_VMA_LOCK VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma); - __vm_area_free(vma); -} #endif + vma_numab_state_free(vma); + free_anon_vma_name(vma); + vma_lock_free(vma); +} + void vm_area_free(struct vm_area_struct *vma) { #ifdef CONFIG_PER_VMA_LOCK - call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb); + kfree_rcu(vma, vm_rcu); #else __vm_area_free(vma); #endif @@ -3155,6 +3157,12 @@ void __init mm_cache_init(void) void __init proc_caches_init(void) { + struct kmem_cache_args vm_args = { + .align = __alignof__(struct vm_area_struct), + .sheaf_capacity = 32, + .sheaf_rcu_dtor = __vma_area_rcu_free_dtor, + }; + sighand_cachep = kmem_cache_create("sighand_cache", sizeof(struct sighand_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| @@ -3172,7 +3180,10 @@ void __init proc_caches_init(void) SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL); - vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT); + vm_area_cachep = kmem_cache_create("vm_area_struct", + sizeof(struct vm_area_struct), &vm_args, + SLAB_PANIC|SLAB_ACCOUNT); + #ifdef CONFIG_PER_VMA_LOCK vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT); #endif From patchwork Tue Nov 12 16:38:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 13872510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7EACD42BAB for ; Tue, 12 Nov 2024 16:40:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 17C956B0096; Tue, 12 Nov 2024 11:39:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D85006B00FE; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A61B06B0103; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com 
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:49 +0100
Subject: [PATCH RFC 5/6] mm, slub: cheaper locking for percpu sheaves
Message-Id: <20241112-slub-percpu-caches-v1-5-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E. McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Mateusz Guzik , Jann Horn , Vlastimil Babka

Instead of local_lock_irqsave(), use just get_cpu_ptr() (which only disables preemption) and then set an active flag. If potential callers include irq handlers, the operation must use a trylock variant that bails out if the flag is already active, because we have interrupted another operation in progress. Changing the flag doesn't need to be atomic, as the irq is on the same cpu.

This should make using percpu sheaves cheaper, with the downside that some unlucky operations in irq handlers have to fall back to the non-sheaf variants. That should be rare, so there should be a net benefit.

On PREEMPT_RT we can simply use local_lock(), as that does the right thing without the need to disable irqs.

Thanks to Mateusz Guzik and Jann Horn for suggesting this kind of locking scheme in online conversations.
Initially attempted to fully copy the page allocator's pcplist locking, but its reliance on spin_trylock() made it much more costly. Cc: Mateusz Guzik Cc: Jann Horn Signed-off-by: Vlastimil Babka --- mm/slub.c | 230 +++++++++++++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 174 insertions(+), 56 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 6811d766c0470cd7066c2574ad86e00405c916bb..1900afa6153ca6d88f9df7db3ce84d98629489e7 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -450,14 +450,111 @@ struct slab_sheaf { void *objects[]; }; +struct local_tryirq_lock { +#ifndef CONFIG_PREEMPT_RT + int active; +#else + local_lock_t llock; +#endif +}; + struct slub_percpu_sheaves { - local_lock_t lock; + struct local_tryirq_lock lock; struct slab_sheaf *main; /* never NULL when unlocked */ struct slab_sheaf *spare; /* empty or full, may be NULL */ struct slab_sheaf *rcu_free; struct node_barn *barn; }; +/* + * Generic helper to lookup a per-cpu variable with a lock that allows only + * trylock from irq handler context to avoid expensive irq disable or atomic + * operations and memory barriers - only compiler barriers are needed. + * + * On !PREEMPT_RT this is done by get_cpu_ptr(), which disables preemption, and + * checking that a variable is not already set to 1. If it is, it means we are + * in irq handler that has interrupted the locked operation, and must give up. + * Otherwise we set the variable to 1. + * + * On PREEMPT_RT we can simply use local_lock() as that does the right thing + * without actually disabling irqs. Thus the trylock can't actually fail. + * + */ +#ifndef CONFIG_PREEMPT_RT + +#define pcpu_local_tryirq_lock(type, member, ptr) \ +({ \ + type *_ret; \ + lockdep_assert(!irq_count()); \ + _ret = get_cpu_ptr(ptr); \ + lockdep_assert(_ret->member.active == 0); \ + WRITE_ONCE(_ret->member.active, 1); \ + barrier(); \ + _ret; \ +}) + +#define pcpu_local_tryirq_trylock(type, member, ptr) \ +({ \ + type *_ret; \ + _ret = get_cpu_ptr(ptr); \ + if (unlikely(READ_ONCE(_ret->member.active) == 1)) { \ + put_cpu_ptr(ptr); \ + _ret = NULL; \ + } else { \ + WRITE_ONCE(_ret->member.active, 1); \ + barrier(); \ + } \ + _ret; \ +}) + +#define pcpu_local_tryirq_unlock(member, ptr) \ +({ \ + lockdep_assert(this_cpu_ptr(ptr)->member.active == 1); \ + barrier(); \ + WRITE_ONCE(this_cpu_ptr(ptr)->member.active, 0); \ + put_cpu_ptr(ptr); \ +}) + +#define local_tryirq_lock_init(lock) \ +({ \ + (lock)->active = 0; \ +}) + +#else + +#define pcpu_local_tryirq_lock(type, member, ptr) \ +({ \ + type *_ret; \ + local_lock(&ptr->member.llock); \ + _ret = this_cpu_ptr(ptr); \ + _ret; \ +}) + +#define pcpu_local_tryirq_trylock(type, member, ptr) \ + pcpu_local_tryirq_lock(type, member, ptr) + +#define pcpu_local_tryirq_unlock(member, ptr) \ +({ \ + local_unlock(&ptr->member.llock); \ +}) + +#define local_tryirq_lock_init(lock) \ +({ \ + local_lock_init(&(lock)->llock); \ +}) + +#endif + +/* struct slub_percpu_sheaves specific helpers. */ +#define cpu_sheaves_lock(ptr) \ + pcpu_local_tryirq_lock(struct slub_percpu_sheaves, lock, ptr) + +#define cpu_sheaves_trylock(ptr) \ + pcpu_local_tryirq_trylock(struct slub_percpu_sheaves, lock, ptr) + +#define cpu_sheaves_unlock(ptr) \ + pcpu_local_tryirq_unlock(lock, ptr) + /* * The slab lists for all objects. 
*/ @@ -2517,17 +2614,20 @@ static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp) static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p); -static void sheaf_flush_main(struct kmem_cache *s) +/* returns true if at least partially flushed */ +static bool sheaf_flush_main(struct kmem_cache *s) { struct slub_percpu_sheaves *pcs; unsigned int batch, remaining; void *objects[PCS_BATCH_MAX]; struct slab_sheaf *sheaf; - unsigned long flags; + bool ret = false; next_batch: - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + return ret; + sheaf = pcs->main; batch = min(PCS_BATCH_MAX, sheaf->size); @@ -2537,14 +2637,18 @@ static void sheaf_flush_main(struct kmem_cache *s) remaining = sheaf->size; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); __kmem_cache_free_bulk(s, batch, &objects[0]); stat_add(s, SHEAF_FLUSH_MAIN, batch); + ret = true; + if (remaining) goto next_batch; + + return ret; } static void sheaf_flush(struct kmem_cache *s, struct slab_sheaf *sheaf) @@ -2581,6 +2685,8 @@ static void rcu_free_sheaf_nobarn(struct rcu_head *head) * Caller needs to make sure migration is disabled in order to fully flush * single cpu's sheaves * + * must not be called from an irq + * * flushing operations are rare so let's keep it simple and flush to slabs * directly, skipping the barn */ @@ -2588,10 +2694,8 @@ static void pcs_flush_all(struct kmem_cache *s) { struct slub_percpu_sheaves *pcs; struct slab_sheaf *spare, *rcu_free; - unsigned long flags; - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_lock(s->cpu_sheaves); spare = pcs->spare; pcs->spare = NULL; @@ -2599,7 +2703,7 @@ static void pcs_flush_all(struct kmem_cache *s) rcu_free = pcs->rcu_free; pcs->rcu_free = NULL; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); if (spare) { sheaf_flush(s, spare); @@ -4523,11 +4627,11 @@ static __fastpath_inline void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) { struct slub_percpu_sheaves *pcs; - unsigned long flags; void *object; - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + return NULL; if (unlikely(pcs->main->size == 0)) { @@ -4559,7 +4663,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) } } - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); if (!can_alloc) return NULL; @@ -4581,8 +4685,11 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) if (!full) return NULL; - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + /* + * we can reach here only when gfpflags_allow_blocking + * so this must not be an irq + */ + pcs = cpu_sheaves_lock(s->cpu_sheaves); /* * If we are returning empty sheaf, we either got it from the @@ -4615,7 +4722,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp) do_alloc: object = pcs->main->objects[--pcs->main->size]; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); stat(s, ALLOC_PCS); @@ -4627,13 +4734,13 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p) { struct slub_percpu_sheaves *pcs; struct slab_sheaf *main; - unsigned long flags; unsigned int allocated = 0; unsigned int batch; next_batch: - 
local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + return allocated; if (unlikely(pcs->main->size == 0)) { @@ -4652,7 +4759,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p) goto do_alloc; } - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); /* * Once full sheaves in barn are depleted, let the bulk @@ -4670,7 +4777,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p) main->size -= batch; memcpy(p, main->objects + main->size, batch * sizeof(void *)); - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); stat_add(s, ALLOC_PCS, batch); @@ -5090,14 +5197,14 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, * The object is expected to have passed slab_free_hook() already. */ static __fastpath_inline -void free_to_pcs(struct kmem_cache *s, void *object) +bool free_to_pcs(struct kmem_cache *s, void *object) { struct slub_percpu_sheaves *pcs; - unsigned long flags; restart: - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + return false; if (unlikely(pcs->main->size == s->sheaf_capacity)) { @@ -5131,7 +5238,7 @@ void free_to_pcs(struct kmem_cache *s, void *object) struct slab_sheaf *to_flush = pcs->spare; pcs->spare = NULL; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); sheaf_flush(s, to_flush); empty = to_flush; @@ -5139,18 +5246,27 @@ void free_to_pcs(struct kmem_cache *s, void *object) } alloc_empty: - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); empty = alloc_empty_sheaf(s, GFP_NOWAIT); if (!empty) { - sheaf_flush_main(s); - goto restart; + if (sheaf_flush_main(s)) + goto restart; + else + return false; } got_empty: - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) { + struct node_barn *barn; + + barn = get_node(s, numa_mem_id())->barn; + + barn_put_empty_sheaf(barn, empty, true); + return false; + } /* * if we put any sheaf to barn here, it's because we raced or @@ -5178,9 +5294,11 @@ void free_to_pcs(struct kmem_cache *s, void *object) do_free: pcs->main->objects[pcs->main->size++] = object; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); stat(s, FREE_PCS); + + return true; } static void __rcu_free_sheaf_prepare(struct kmem_cache *s, @@ -5242,10 +5360,10 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj) { struct slub_percpu_sheaves *pcs; struct slab_sheaf *rcu_sheaf; - unsigned long flags; - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + goto fail; if (unlikely(!pcs->rcu_free)) { @@ -5258,17 +5376,16 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj) goto do_free; } - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); empty = alloc_empty_sheaf(s, GFP_NOWAIT); - if (!empty) { - stat(s, FREE_RCU_SHEAF_FAIL); - return false; - } + if (!empty) + goto fail; - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + goto fail; if (unlikely(pcs->rcu_free)) 
barn_put_empty_sheaf(pcs->barn, empty, true); @@ -5283,19 +5400,22 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj) rcu_sheaf->objects[rcu_sheaf->size++] = obj; if (likely(rcu_sheaf->size < s->sheaf_capacity)) { - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); stat(s, FREE_RCU_SHEAF); return true; } pcs->rcu_free = NULL; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf); stat(s, FREE_RCU_SHEAF); - return true; + +fail: + stat(s, FREE_RCU_SHEAF_FAIL); + return false; } /* @@ -5307,7 +5427,6 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) { struct slub_percpu_sheaves *pcs; struct slab_sheaf *main; - unsigned long flags; unsigned int batch, i = 0; bool init; @@ -5330,8 +5449,9 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) } next_batch: - local_lock_irqsave(&s->cpu_sheaves->lock, flags); - pcs = this_cpu_ptr(s->cpu_sheaves); + pcs = cpu_sheaves_trylock(s->cpu_sheaves); + if (!pcs) + goto fallback; if (unlikely(pcs->main->size == s->sheaf_capacity)) { @@ -5361,13 +5481,13 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) } no_empty: - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); /* * if we depleted all empty sheaves in the barn or there are too * many full sheaves, free the rest to slab pages */ - +fallback: __kmem_cache_free_bulk(s, size, p); return; } @@ -5379,7 +5499,7 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p) memcpy(main->objects + main->size, p, batch * sizeof(void *)); main->size += batch; - local_unlock_irqrestore(&s->cpu_sheaves->lock, flags); + cpu_sheaves_unlock(s->cpu_sheaves); stat_add(s, FREE_PCS, batch); @@ -5479,9 +5599,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object, if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false))) return; - if (s->cpu_sheaves) - free_to_pcs(s, object); - else + if (!s->cpu_sheaves || !free_to_pcs(s, object)) do_slab_free(s, slab, object, object, 1, addr); } @@ -6121,7 +6239,7 @@ static int init_percpu_sheaves(struct kmem_cache *s) pcs = per_cpu_ptr(s->cpu_sheaves, cpu); - local_lock_init(&pcs->lock); + local_tryirq_lock_init(&pcs->lock); nid = cpu_to_mem(cpu); From patchwork Tue Nov 12 16:38:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vlastimil Babka X-Patchwork-Id: 13872511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3F51DD42BAB for ; Tue, 12 Nov 2024 16:40:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4235C6B00FD; Tue, 12 Nov 2024 11:39:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 10F126B0105; Tue, 12 Nov 2024 11:39:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CEB226B00FD; Tue, 12 Nov 2024 11:39:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 203936B0096 for ; Tue, 12 Nov 2024 11:39:52 -0500 (EST) Received: from smtpin28.hostedemail.com (a10.router.float.18 [10.200.18.1]) by 
From: Vlastimil Babka
Date: Tue, 12 Nov 2024 17:38:50 +0100
Subject: [PATCH RFC 6/6] mm, slub: sheaf prefilling for guaranteed allocations
Message-Id: <20241112-slub-percpu-caches-v1-6-ddc0bdc27e05@suse.cz>
References: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
In-Reply-To: <20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@suse.cz>
To: Suren Baghdasaryan , "Liam R. Howlett" , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim
Cc: Roman Gushchin , Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Paul E. McKenney" , Lorenzo Stoakes , Matthew Wilcox , Boqun Feng , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, Vlastimil Babka

Add three functions for efficient guaranteed allocations in a critical section (that cannot sleep), when the exact number of allocations is not known beforehand but an upper limit can be calculated.

kmem_cache_prefill_sheaf() returns a sheaf containing at least the given number of objects.

kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf and is guaranteed not to fail until the sheaf is depleted.

kmem_cache_return_sheaf() gives the sheaf back to the slab allocator after the critical section. This will also attempt to refill the sheaf to the cache's sheaf capacity for more efficient sheaf handling, but it is not strictly necessary for the refill to succeed.
TODO: the current implementation is limited to cache's sheaf_capacity

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab.h |  11 ++++
 mm/slub.c            | 149 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 160 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 23904321992ad2eeb9389d0883cf4d5d5d71d896..a87dc3c6392fe235de2eabe1792df86d40c3bbf9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -820,6 +820,17 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int count);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf);
+
+void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
+		struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_from_sheaf(...)	\
+		alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
+
 /*
  * These macros allow declaring a kmem_buckets * parameter alongside size, which
  * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call

diff --git a/mm/slub.c b/mm/slub.c
index 1900afa6153ca6d88f9df7db3ce84d98629489e7..a0e2cb7dfb5173f39f36bea1eb9760c3c1b99dd7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -444,6 +444,7 @@ struct slab_sheaf {
 	union {
 		struct rcu_head rcu_head;
 		struct list_head barn_list;
+		bool oversize;
 	};
 	struct kmem_cache *cache;
 	unsigned int size;
@@ -2819,6 +2820,30 @@ static int barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf,
 	return ret;
 }
 
+static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
+{
+	struct slab_sheaf *sheaf = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_empty) {
+		sheaf = list_first_entry(&barn->sheaves_empty,
+					 struct slab_sheaf, barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_empty--;
+	} else if (barn->nr_full) {
+		sheaf = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
+					 barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_full--;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return sheaf;
+}
+
 /*
  * If a full sheaf is available, return it and put the supplied empty one to
  * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't
@@ -4893,6 +4918,130 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int nod
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
+
+/*
+ * returns a sheaf that has at least the given count of objects
+ * when prefilling is needed, do so with the given gfp flags
+ *
+ * return NULL if prefilling failed, or when the requested count is
+ * above cache's sheaf_capacity (TODO: lift this limitation)
+ */
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int count)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *sheaf = NULL;
+
+	//TODO: handle via oversize sheaf
+	if (count > s->sheaf_capacity)
+		return NULL;
+
+	pcs = cpu_sheaves_lock(s->cpu_sheaves);
+
+	if (pcs->spare && pcs->spare->size > 0) {
+		sheaf = pcs->spare;
+		pcs->spare = NULL;
+	}
+
+	if (!sheaf)
+		sheaf = barn_get_full_or_empty_sheaf(pcs->barn);
+
+	cpu_sheaves_unlock(s->cpu_sheaves);
+
+	if (!sheaf)
+		sheaf = alloc_empty_sheaf(s, gfp);
+
+	if (sheaf && sheaf->size < count) {
+		if (refill_sheaf(s, sheaf, gfp)) {
+			sheaf_flush(s, sheaf);
+			free_empty_sheaf(s, sheaf);
+			sheaf = NULL;
+		}
+	}
+
+	return sheaf;
+}
+
+/*
+ * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf().
+ * It tries to refill the sheaf back to the cache's sheaf_capacity
+ * to avoid handling partially full sheaves.
+ *
+ * If the refill fails because gfp is e.g. GFP_NOWAIT, the sheaf is
+ * instead dissolved.
+ */
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf)
+{
+	struct slub_percpu_sheaves *pcs;
+	bool refill = false;
+	struct node_barn *barn;
+
+	//TODO: handle oversize sheaf
+
+	pcs = cpu_sheaves_lock(s->cpu_sheaves);
+
+	if (!pcs->spare) {
+		pcs->spare = sheaf;
+		sheaf = NULL;
+	}
+
+	/* racy check */
+	if (!sheaf && pcs->barn->nr_full >= MAX_FULL_SHEAVES) {
+		barn = pcs->barn;
+		refill = true;
+	}
+
+	cpu_sheaves_unlock(s->cpu_sheaves);
+
+	if (!sheaf)
+		return;
+
+	/*
+	 * if the barn is full of full sheaves or we fail to refill the sheaf,
+	 * simply flush and free it
+	 */
+	if (!refill || refill_sheaf(s, sheaf, gfp)) {
+		sheaf_flush(s, sheaf);
+		free_empty_sheaf(s, sheaf);
+		return;
+	}
+
+	/* we racily determined the sheaf would fit, so now force it */
+	barn_put_full_sheaf(barn, sheaf, true);
+}
+
+/*
+ * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf().
+ *
+ * Guaranteed not to fail for as many allocations as the requested count.
+ * After the sheaf is emptied, it fails: there is no fallback to the slab
+ * cache itself.
+ *
+ * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
+ * memcg charging is forced over limit if necessary, to avoid failure.
+ */
+void *
+kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
+		struct slab_sheaf *sheaf)
+{
+	void *ret = NULL;
+	bool init;
+
+	if (sheaf->size == 0)
+		goto out;
+
+	ret = sheaf->objects[--sheaf->size];
+
+	init = slab_want_init_on_alloc(gfp, s);
+
+	/* add __GFP_NOFAIL to force successful memcg charging */
+	slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
+out:
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
+
+	return ret;
+}
+
 /*
  * To avoid unnecessary overhead, we pass through large allocation requests
  * directly to the page allocator. We use __GFP_COMP, because we will need to