From patchwork Tue Aug 23 17:04:00 2022
From: Vlastimil Babka <vbabka@suse.cz>
To: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
 Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
 Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith, Vlastimil Babka
Subject: [PATCH v2 5/5] mm/slub: simplify __cmpxchg_double_slab() and slab_[un]lock()
Date: Tue, 23 Aug 2022 19:04:00 +0200
Message-Id: <20220823170400.26546-6-vbabka@suse.cz>
In-Reply-To: <20220823170400.26546-1-vbabka@suse.cz>
References: <20220823170400.26546-1-vbabka@suse.cz>

The PREEMPT_RT specific disabling of irqs in __cmpxchg_double_slab()
(through slab_[un]lock()) is unnecessary as bit_spin_lock() disables
preemption and that's sufficient on RT where interrupts are threaded.
That means we no longer need the slab_[un]lock() wrappers, so delete
them and rename the current __slab_[un]lock() to slab_[un]lock().

Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Sebastian Andrzej Siewior
---
 mm/slub.c | 39 ++++++++++++---------------------------
 1 file changed, 12 insertions(+), 27 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0444a2ba4f12..bb8c1292d7e8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -446,7 +446,7 @@ slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 /*
  * Per slab locking using the pagelock
  */
-static __always_inline void __slab_lock(struct slab *slab)
+static __always_inline void slab_lock(struct slab *slab)
 {
 	struct page *page = slab_page(slab);
 
@@ -454,7 +454,7 @@ static __always_inline void __slab_lock(struct slab *slab)
 	bit_spin_lock(PG_locked, &page->flags);
 }
 
-static __always_inline void __slab_unlock(struct slab *slab)
+static __always_inline void slab_unlock(struct slab *slab)
 {
 	struct page *page = slab_page(slab);
 
@@ -462,24 +462,12 @@ static __always_inline void __slab_unlock(struct slab *slab)
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
-static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
-{
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		local_irq_save(*flags);
-	__slab_lock(slab);
-}
-
-static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
-{
-	__slab_unlock(slab);
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		local_irq_restore(*flags);
-}
-
 /*
  * Interrupts must be disabled (for the fallback code to work right), typically
- * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
- * so we disable interrupts as part of slab_[un]lock().
+ * by an _irqsave() lock variant. Except on PREEMPT_RT where these variants do
+ * not actually disable interrupts. On the other hand the migrate_disable()
+ * done by bit_spin_lock() is sufficient on PREEMPT_RT thanks to its threaded
+ * interrupts.
  */
 static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
 		void *freelist_old, unsigned long counters_old,
@@ -498,18 +486,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
 	} else
 #endif
 	{
-		/* init to 0 to prevent spurious warnings */
-		unsigned long flags = 0;
-
-		slab_lock(slab, &flags);
+		slab_lock(slab);
 		if (slab->freelist == freelist_old &&
 		    slab->counters == counters_old) {
 			slab->freelist = freelist_new;
 			slab->counters = counters_new;
-			slab_unlock(slab, &flags);
+			slab_unlock(slab);
 			return true;
 		}
-		slab_unlock(slab, &flags);
+		slab_unlock(slab);
 	}
 
 	cpu_relax();
@@ -540,16 +525,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
 		unsigned long flags;
 
 		local_irq_save(flags);
-		__slab_lock(slab);
+		slab_lock(slab);
 		if (slab->freelist == freelist_old &&
 		    slab->counters == counters_old) {
 			slab->freelist = freelist_new;
 			slab->counters = counters_new;
-			__slab_unlock(slab);
+			slab_unlock(slab);
 			local_irq_restore(flags);
 			return true;
 		}
-		__slab_unlock(slab);
+		slab_unlock(slab);
 		local_irq_restore(flags);
 	}