From patchwork Mon Mar 17 14:33:05 2025
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 14019360
From: Vlastimil Babka
Date: Mon, 17 Mar 2025 15:33:05 +0100
Subject: [PATCH RFC v3 4/8] slab: sheaf prefilling for guaranteed allocations
Message-Id: <20250317-slub-percpu-caches-v3-4-9d9884d8b643@suse.cz>
References: <20250317-slub-percpu-caches-v3-0-9d9884d8b643@suse.cz>
In-Reply-To: <20250317-slub-percpu-caches-v3-0-9d9884d8b643@suse.cz>
Howlett" , Christoph Lameter , David Rientjes Cc: Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org, vbabka@suse.cz X-Mailer: b4 0.14.2 X-Stat-Signature: 6gz91bm7is5uj7hxwchou9mw3x6cjy1d X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 845C780017 X-Rspam-User: X-HE-Tag: 1742222028-141789 X-HE-Meta: U2FsdGVkX19eoe6ZD4YCwqk6RPRuMHv7pHu6c/aO5wDqvxOe3ZrLRTy2mw5sZ0M38opfRNIe5HbUfaEHaea3ev9UPTrY49shLCKVPSJfCcQM0qmEXG+f4vUXc9IjQDoJdEuo8Sw8OgvsJibpP5fHqVFwTqlxL6pb7tadAsi/gydlTbKZiCAHbY//3TvTi5Q/sU4VCgxiL63N5LhmUu7tU6d4BOkK2/bn41kP3dz/uCCNnmmT16O5wPB8HZLX+dQlWk/vizSeWnPIt4Au300RR+mG2tociK1gxWkM38kE5X6E0qQToB25G6ocXo655aTmp+4yRObqC4kDG76gBKYi+r/AVy3Yb6CbHxnuMevl2uUp8/OCAvF4zdnMVMH8fESgAE2WTbEtfTwKmkoFKYeRAgoezC22Bd4sD/9FMDmS2SiitcxZTmRl5rO4IVuuxGpTJ0DS6CakdISkhL5qCKazYG3OYGljhzWMevWpoNGQ77ny8obKSp/6QcsNHSoEhntjCmxwvkjjNCXFG36pmBjCBHCxUUL6VyWOK5ZG2iHq2J6Pc0bo2IlRKaLnA4FB2YJoBtqR4XhLoNMuv50GVHvbbX6H+g6BBI3Pd/wA19JSGsi/KEgDpMPqLSJxcvwDN2mlKLVcvf0osNO30h11SKX8yjob3ZjpdpBhPXimNuOnwowr2NMMI/mzrjVD09kWovKA/7jKDL8dC9foI4zpcCdSzlyPLB61CvOs4OZ++e6LcxtqST0EyA3DJZsJ1QrwqUjysoQS5y2UKjkMVGJxODVkdIEIlkd2U6OU5/41H0T7MR5Rbjof+ipbY7itnHaxTEwgqSDK2g96bFJgD65lPaA6ZJ3+spDp+LEX/lZO5RwLUtfjOj9MXIGHKdiPImcHzZlwe/CBV6F3JupggTqbnLSUkFUaTrFIlX6TXuGRWJC1MUcUtJr66bKMrlZ9uRaWLdg3mBG0i59I64rrMPQIfeh KJWlKIH3 zt+4OC2Ei031l5aio4dCyO+4wPXmgN/j8zfm2qYQi59ByS5NchSDg38OUJtwr03CNHT83BHdhphEPk5bk+X5TXYDoABCggPp0Je4PuaCqiF9/7Q6tAyG6Z0V5CcLTREstzPLyTosZhXprY50WUutWRb+3tF2DVo194wTsyt7XinR8PJrEwJZAXU0AvHsJQ18gWREvKNh+EauFqR/rHr/sBmaNgQaEvfTUCSgS5qcBOACkEqJ+lj53irk8tTbQyLV++8vUYG11RiE/KPcIBf0qc3WfWw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Add functions for efficient guaranteed allocations e.g. in a critical section that cannot sleep, when the exact number of allocations is not known beforehand, but an upper limit can be calculated. kmem_cache_prefill_sheaf() returns a sheaf containing at least given number of objects. kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf and is guaranteed not to fail until depleted. kmem_cache_return_sheaf() is for giving the sheaf back to the slab allocator after the critical section. This will also attempt to refill it to cache's sheaf capacity for better efficiency of sheaves handling, but it's not stricly necessary to succeed. kmem_cache_refill_sheaf() can be used to refill a previously obtained sheaf to requested size. If the current size is sufficient, it does nothing. If the requested size exceeds cache's sheaf_capacity and the sheaf's current capacity, the sheaf will be replaced with a new one, hence the indirect pointer parameter. kmem_cache_sheaf_size() can be used to query the current size. The implementation supports requesting sizes that exceed cache's sheaf_capacity, but it is not efficient - such sheaves are allocated fresh in kmem_cache_prefill_sheaf() and flushed and freed immediately by kmem_cache_return_sheaf(). kmem_cache_refill_sheaf() might be especially ineffective when replacing a sheaf with a new one of a larger capacity. It is therefore better to size cache's sheaf_capacity accordingly. 
Signed-off-by: Vlastimil Babka
Reviewed-by: Suren Baghdasaryan
---
 include/linux/slab.h |  16 ++++
 mm/slub.c            | 228 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 244 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0e1b25228c77140d05b5b4433c9d7923de36ec05..dd01b67982e856b1b02f4f0e6fc557726e7f02a8 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -829,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags, int node)
 					   __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_node(...)	alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
 
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
+
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+			    struct slab_sheaf **sheafp, unsigned int size);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+			     struct slab_sheaf *sheaf);
+
+void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
+					 struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
+#define kmem_cache_alloc_from_sheaf(...)	\
+	alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
+
+unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf);
+
 /*
  * These macros allow declaring a kmem_buckets * parameter alongside size, which
  * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
diff --git a/mm/slub.c b/mm/slub.c
index 83f4395267dccfbc144920baa7d0a85a27fbb1b4..ab3532d5f41045d8268b12ad774541dcd066c4c4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -443,6 +443,8 @@ struct slab_sheaf {
 	union {
 		struct rcu_head rcu_head;
 		struct list_head barn_list;
+		/* only used for prefilled sheaves */
+		unsigned int capacity;
 	};
 	struct kmem_cache *cache;
 	unsigned int size;
@@ -2748,6 +2750,30 @@ static int barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf,
 	return ret;
 }
 
+static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
+{
+	struct slab_sheaf *sheaf = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&barn->lock, flags);
+
+	if (barn->nr_full) {
+		sheaf = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
+					 barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_full--;
+	} else if (barn->nr_empty) {
+		sheaf = list_first_entry(&barn->sheaves_empty,
+					 struct slab_sheaf, barn_list);
+		list_del(&sheaf->barn_list);
+		barn->nr_empty--;
+	}
+
+	spin_unlock_irqrestore(&barn->lock, flags);
+
+	return sheaf;
+}
+
 /*
  * If a full sheaf is available, return it and put the supplied empty one to
  * barn.
 * We ignore the limit on empty sheaves as the number of sheaves doesn't
 * change.
 */
@@ -4844,6 +4870,208 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int node)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
+/*
+ * returns a sheaf that has at least the requested size
+ * when prefilling is needed, do so with given gfp flags
+ *
+ * return NULL if sheaf allocation or prefilling failed
+ */
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
+{
+	struct slub_percpu_sheaves *pcs;
+	struct slab_sheaf *sheaf = NULL;
+
+	if (unlikely(size > s->sheaf_capacity)) {
+		sheaf = kzalloc(struct_size(sheaf, objects, size), gfp);
+		if (!sheaf)
+			return NULL;
+
+		sheaf->cache = s;
+		sheaf->capacity = size;
+
+		if (!__kmem_cache_alloc_bulk(s, gfp, size,
+					     &sheaf->objects[0])) {
+			kfree(sheaf);
+			return NULL;
+		}
+
+		sheaf->size = size;
+
+		return sheaf;
+	}
+
+	localtry_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (pcs->spare) {
+		sheaf = pcs->spare;
+		pcs->spare = NULL;
+	}
+
+	if (!sheaf)
+		sheaf = barn_get_full_or_empty_sheaf(pcs->barn);
+
+	localtry_unlock(&s->cpu_sheaves->lock);
+
+	if (!sheaf)
+		sheaf = alloc_empty_sheaf(s, gfp);
+
+	if (sheaf && sheaf->size < size) {
+		if (refill_sheaf(s, sheaf, gfp)) {
+			sheaf_flush_unused(s, sheaf);
+			free_empty_sheaf(s, sheaf);
+			sheaf = NULL;
+		}
+	}
+
+	if (sheaf)
+		sheaf->capacity = s->sheaf_capacity;
+
+	return sheaf;
+}
+
+/*
+ * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
+ *
+ * If the sheaf cannot simply become the percpu spare sheaf, but there's space
+ * for a full sheaf in the barn, we try to refill the sheaf back to the cache's
+ * sheaf_capacity to avoid handling partially full sheaves.
+ *
+ * If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full, the
+ * sheaf is instead flushed and freed.
+ */
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+			     struct slab_sheaf *sheaf)
+{
+	struct slub_percpu_sheaves *pcs;
+	bool refill = false;
+	struct node_barn *barn;
+
+	if (unlikely(sheaf->capacity != s->sheaf_capacity)) {
+		sheaf_flush_unused(s, sheaf);
+		kfree(sheaf);
+		return;
+	}
+
+	localtry_lock(&s->cpu_sheaves->lock);
+	pcs = this_cpu_ptr(s->cpu_sheaves);
+
+	if (!pcs->spare) {
+		pcs->spare = sheaf;
+		sheaf = NULL;
+	} else if (data_race(pcs->barn->nr_full) < MAX_FULL_SHEAVES) {
+		barn = pcs->barn;
+		refill = true;
+	}
+
+	localtry_unlock(&s->cpu_sheaves->lock);
+
+	if (!sheaf)
+		return;
+
+	/*
+	 * if the barn is full of full sheaves or we fail to refill the sheaf,
+	 * simply flush and free it
+	 */
+	if (!refill || refill_sheaf(s, sheaf, gfp)) {
+		sheaf_flush_unused(s, sheaf);
+		free_empty_sheaf(s, sheaf);
+		return;
+	}
+
+	/* we racily determined the sheaf would fit, so now force it */
+	barn_put_full_sheaf(barn, sheaf, true);
+}
+
+/*
+ * refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
+ * the given size
+ *
+ * the sheaf might be replaced by a new one when requesting more than
+ * s->sheaf_capacity objects. If such replacement is necessary but the refill
+ * fails (returning -ENOMEM), the existing sheaf is left intact
+ *
+ * In practice we always refill to the sheaf's full capacity.
+ */
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+			    struct slab_sheaf **sheafp, unsigned int size)
+{
+	struct slab_sheaf *sheaf;
+
+	/*
+	 * TODO: do we want to support *sheaf == NULL to be equivalent of
+	 * kmem_cache_prefill_sheaf() ?
+	 */
+	if (!sheafp || !(*sheafp))
+		return -EINVAL;
+
+	sheaf = *sheafp;
+	if (sheaf->size >= size)
+		return 0;
+
+	if (likely(sheaf->capacity >= size)) {
+		if (likely(sheaf->capacity == s->sheaf_capacity))
+			return refill_sheaf(s, sheaf, gfp);
+
+		if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
+					     &sheaf->objects[sheaf->size])) {
+			return -ENOMEM;
+		}
+		sheaf->size = sheaf->capacity;
+
+		return 0;
+	}
+
+	/*
+	 * We had a regular sized sheaf and need an oversize one, or we had an
+	 * oversize one already but need a larger one now.
+	 * This should be a very rare path so let's not complicate it.
+	 */
+	sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
+	if (!sheaf)
+		return -ENOMEM;
+
+	kmem_cache_return_sheaf(s, gfp, *sheafp);
+	*sheafp = sheaf;
+	return 0;
+}
+
+/*
+ * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
+ *
+ * Guaranteed not to fail for as many allocations as was the requested size.
+ * After the sheaf is emptied, it fails - no fallback to the slab cache itself.
+ *
+ * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
+ * memcg charging is forced over limit if necessary, to avoid failure.
+ */
+void *
+kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
+				   struct slab_sheaf *sheaf)
+{
+	void *ret = NULL;
+	bool init;
+
+	if (sheaf->size == 0)
+		goto out;
+
+	ret = sheaf->objects[--sheaf->size];
+
+	init = slab_want_init_on_alloc(gfp, s);
+
+	/* add __GFP_NOFAIL to force successful memcg charging */
+	slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
+out:
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
+
+	return ret;
+}
+
+unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
+{
+	return sheaf->size;
+}
 
 /*
  * To avoid unnecessary overhead, we pass through large allocation requests
  * directly to the page allocator. We use __GFP_COMP, because we will need to