From patchwork Wed Nov 29 09:53:28 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13472567
From: Vlastimil Babka <vbabka@suse.cz>
Date: Wed, 29 Nov 2023 10:53:28 +0100
Subject: [PATCH RFC v3 3/9] mm/slub: handle bulk and single object freeing separately
MIME-Version: 1.0
Message-Id: <20231129-slub-percpu-caches-v3-3-6bcf536772bc@suse.cz>
References: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
In-Reply-To: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Matthew Wilcox, "Liam R. Howlett"
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4
Until now we have a single function slab_free() handling both single
object freeing and bulk freeing with the necessary hooks, the latter
case requiring slab_free_freelist_hook(). However, it is better to
distinguish the two scenarios for the following reasons:

- the code is simpler to follow for the single object case

- better code generation - although inlining should eliminate
  slab_free_freelist_hook() when no debugging options are enabled, it
  seems it's not perfect. When e.g. KASAN is enabled, we're imposing
  additional unnecessary overhead for single object freeing.

- preparation to add percpu array caches in later patches

Therefore, simplify slab_free() for the single object case by dropping
unnecessary parameters and calling only slab_free_hook() instead of
slab_free_freelist_hook(). Rename the bulk variant to slab_free_bulk()
and adjust callers accordingly.

While at it, flip (and document) the slab_free_hook() return value so
that it returns true when the freeing can proceed, which matches the
logic of slab_free_freelist_hook() and is not confusingly the opposite.

Additionally, we can simplify a bit by changing the tail parameter of
do_slab_free() when freeing a single object - instead of NULL we can
set it equal to head.

bloat-o-meter shows a small code reduction with a .config that has
KASAN etc. disabled:

add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-118 (-118)
Function                                     old     new   delta
kmem_cache_alloc_bulk                       1203    1196      -7
kmem_cache_free                              861     835     -26
__kmem_cache_free                            741     704     -37
kmem_cache_free_bulk                         911     863     -48

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 57 ++++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 34 insertions(+), 23 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 16748aeada8f..7d23f10d42e6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1770,9 +1770,12 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
+ *
+ * Returns true if freeing of the object can proceed, false if its reuse
+ * was delayed by KASAN quarantine.
  */
-static __always_inline bool slab_free_hook(struct kmem_cache *s,
-						void *x, bool init)
+static __always_inline
+bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
 	kmsan_slab_free(s, x);
@@ -1805,7 +1808,7 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
 		       s->size - s->inuse - rsize);
 	}
 	/* KASAN might put x into memory quarantine, delaying its reuse. */
-	return kasan_slab_free(s, x, init);
+	return !kasan_slab_free(s, x, init);
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
@@ -1815,7 +1818,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 
 	void *object;
 	void *next = *head;
-	void *old_tail = *tail ? *tail : *head;
+	void *old_tail = *tail;
 
 	if (is_kfence_address(next)) {
 		slab_free_hook(s, next, false);
@@ -1831,7 +1834,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		next = get_freepointer(s, object);
 
 		/* If object's reuse doesn't have to be delayed */
-		if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
+		if (slab_free_hook(s, object, slab_want_init_on_free(s))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -1846,9 +1849,6 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		}
 	} while (object != old_tail);
 
-	if (*head == *tail)
-		*tail = NULL;
-
 	return *head != NULL;
 }
 
@@ -3743,7 +3743,6 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 				struct slab *slab, void *head, void *tail,
 				int cnt, unsigned long addr)
 {
-	void *tail_obj = tail ? : head;
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
 	void **freelist;
@@ -3762,14 +3761,14 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	barrier();
 
 	if (unlikely(slab != c->slab)) {
-		__slab_free(s, slab, head, tail_obj, cnt, addr);
+		__slab_free(s, slab, head, tail, cnt, addr);
 		return;
 	}
 
 	if (USE_LOCKLESS_FAST_PATH()) {
 		freelist = READ_ONCE(c->freelist);
 
-		set_freepointer(s, tail_obj, freelist);
+		set_freepointer(s, tail, freelist);
 
 		if (unlikely(!__update_cpu_freelist_fast(s, freelist, head, tid))) {
 			note_cmpxchg_failure("slab_free", s, tid);
@@ -3786,7 +3785,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	tid = c->tid;
 	freelist = c->freelist;
 
-	set_freepointer(s, tail_obj, freelist);
+	set_freepointer(s, tail, freelist);
 
 	c->freelist = head;
 	c->tid = next_tid(tid);
@@ -3799,15 +3798,27 @@ static void do_slab_free(struct kmem_cache *s,
 		struct slab *slab, void *head, void *tail,
 		int cnt, unsigned long addr)
 {
-	void *tail_obj = tail ? : head;
-
-	__slab_free(s, slab, head, tail_obj, cnt, addr);
+	__slab_free(s, slab, head, tail, cnt, addr);
 }
 #endif /* CONFIG_SLUB_TINY */
 
-static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab,
-				void *head, void *tail, void **p, int cnt,
-				unsigned long addr)
+static __fastpath_inline
+void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
+	       unsigned long addr)
+{
+	bool init;
+
+	memcg_slab_free_hook(s, slab, &object, 1);
+
+	init = !is_kfence_address(object) && slab_want_init_on_free(s);
+
+	if (likely(slab_free_hook(s, object, init)))
+		do_slab_free(s, slab, object, object, 1, addr);
+}
+
+static __fastpath_inline
+void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
+		    void *tail, void **p, int cnt, unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, p, cnt);
 	/*
@@ -3821,13 +3832,13 @@ static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab,
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
-	do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr);
+	do_slab_free(cache, virt_to_slab(x), x, x, 1, addr);
 }
 #endif
 
 void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller)
 {
-	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, caller);
+	slab_free(s, virt_to_slab(x), x, caller);
 }
 
 void kmem_cache_free(struct kmem_cache *s, void *x)
@@ -3836,7 +3847,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	if (!s)
 		return;
 	trace_kmem_cache_free(_RET_IP_, x, s);
-	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, _RET_IP_);
+	slab_free(s, virt_to_slab(x), x, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
@@ -3953,8 +3964,8 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (!df.slab)
 			continue;
 
-		slab_free(df.s, df.slab, df.freelist, df.tail, &p[size], df.cnt,
-			  _RET_IP_);
+		slab_free_bulk(df.s, df.slab, df.freelist, df.tail, &p[size],
+			       df.cnt, _RET_IP_);
 	} while (likely(size));
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);