From patchwork Wed Jun 10 16:31:32 2020
X-Patchwork-Submitter: Vlastimil Babka <vbabka@suse.cz>
X-Patchwork-Id: 11598289
From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, vinmenon@codeaurora.org, Kees Cook,
 Matthew Garrett, Roman Gushchin, Vlastimil Babka, Jann Horn,
 Vijayanand Jitta
Subject: [PATCH 6/9] mm, slub: introduce static key for slub_debug()
Date: Wed, 10 Jun 2020 18:31:32 +0200
Message-Id: <20200610163135.17364-7-vbabka@suse.cz>
In-Reply-To: <20200610163135.17364-1-vbabka@suse.cz>
References: <20200610163135.17364-1-vbabka@suse.cz>

One advantage of CONFIG_SLUB_DEBUG is that a generic distro kernel can
be built with the option enabled, yet the checks stay inactive until
explicitly enabled on boot, without rebuilding the kernel. With a
static key, we can further eliminate the overhead of checking whether
a cache has a particular debug flag enabled if we know that there are
no such caches (i.e. slub_debug was not enabled during boot). The same
mechanism is already used for e.g. page_owner, debug_pagealloc and
kmemcg functionality.
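For readers unfamiliar with static keys, here is a minimal sketch of
the pattern this patch applies (not part of the patch; apart from the
identifiers taken from <linux/jump_label.h> and <linux/init.h>, all
names are hypothetical):

  #include <linux/init.h>
  #include <linux/jump_label.h>

  /* The key defaults to false: the branch guarded by it is compiled
   * as a NOP and is only live-patched into a jump when the key is
   * enabled at runtime. */
  static DEFINE_STATIC_KEY_FALSE(my_debug_enabled);

  static inline bool my_debug_check(unsigned long flags)
  {
          /* While the key is disabled, execution falls straight
           * through the NOP and the flags test below is never
           * reached, so the fast path pays no branching cost. */
          if (static_branch_unlikely(&my_debug_enabled))
                  return flags != 0;
          return false;
  }

  static int __init my_debug_setup(char *str)
  {
          /* Flipping the key once during boot parameter parsing
           * patches every call site in the kernel text. */
          static_branch_enable(&my_debug_enabled);
          return 1;
  }
  __setup("my_debug", my_debug_setup);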
This patch introduces the static key and makes kmem_cache_debug(), the
general check for per-cache debug flags, use it. This benefits several
call sites, including the (slow path but still rather frequent)
__slab_free(). The next patches will add more uses.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Roman Gushchin
Reviewed-by: Kees Cook
---
 mm/slub.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8dd2925448ae..24d3e5f832aa 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -114,13 +114,21 @@
  * the fast path and disables lockless freelists.
  */
 
+#ifdef CONFIG_SLUB_DEBUG
+#ifdef CONFIG_SLUB_DEBUG_ON
+DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
+#else
+DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
+#endif
+#endif
+
 static inline int kmem_cache_debug(struct kmem_cache *s)
 {
 #ifdef CONFIG_SLUB_DEBUG
-	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
-#else
-	return 0;
+	if (static_branch_unlikely(&slub_debug_enabled))
+		return s->flags & SLAB_DEBUG_FLAGS;
 #endif
+	return 0;
 }
 
 void *fixup_red_left(struct kmem_cache *s, void *p)
@@ -1389,6 +1397,8 @@ static int __init setup_slub_debug(char *str)
 		slub_debug_string = saved_str;
 	}
 out:
+	if (slub_debug != 0 || slub_debug_string)
+		static_branch_enable(&slub_debug_enabled);
 	if ((static_branch_unlikely(&init_on_alloc) ||
 	     static_branch_unlikely(&init_on_free)) &&
 	    (slub_debug & SLAB_POISON))
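A usage note (illustrative; the flags and cache name below are just
examples of the documented slub_debug boot parameter syntax): after
this patch, any invocation that activates debugging, e.g.

  slub_debug              # default debug flags for all caches
  slub_debug=ZP,dentry    # red zoning + poisoning for dentry only

also enables the slub_debug_enabled static key in setup_slub_debug().
Note that the new condition checks slub_debug_string as well, which
covers the case where only cache-specific options are given and the
global slub_debug flags remain zero.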