From patchwork Thu Sep  2 21:52:12 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12472717
Date: Thu, 02 Sep 2021 14:52:12 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, rientjes@google.com, tglx@linutronix.de,
 torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 042/212] mm, slub: use migrate_disable() on PREEMPT_RT
Message-ID: <20210902215212.CUVLaVpin%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Vlastimil Babka
Subject: mm, slub: use migrate_disable() on PREEMPT_RT

We currently use preempt_disable() (directly or via get_cpu_ptr()) to
stabilize the pointer to kmem_cache_cpu.  On PREEMPT_RT this would be
incompatible with the list_lock spinlock.  We can use migrate_disable()
instead, but that increases overhead on !PREEMPT_RT as it's an
unconditional function call.

In order to get the best available mechanism on both PREEMPT_RT and
!PREEMPT_RT, introduce private slub_get_cpu_ptr() and slub_put_cpu_ptr()
wrappers and use them.

Link: https://lkml.kernel.org/r/20210805152000.12817-35-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   39 ++++++++++++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 9 deletions(-)

--- a/mm/slub.c~mm-slub-use-migrate_disable-on-preempt_rt
+++ a/mm/slub.c
@@ -118,6 +118,26 @@
  * the fast path and disables lockless freelists.
  */
 
+/*
+ * We could simply use migrate_disable()/enable() but as long as it's a
+ * function call even on !PREEMPT_RT, use inline preempt_disable() there.
+ */
+#ifndef CONFIG_PREEMPT_RT
+#define slub_get_cpu_ptr(var)	get_cpu_ptr(var)
+#define slub_put_cpu_ptr(var)	put_cpu_ptr(var)
+#else
+#define slub_get_cpu_ptr(var)		\
+({					\
+	migrate_disable();		\
+	this_cpu_ptr(var);		\
+})
+#define slub_put_cpu_ptr(var)		\
+do {					\
+	(void)(var);			\
+	migrate_enable();		\
+} while (0)
+#endif
+
 #ifdef CONFIG_SLUB_DEBUG
 #ifdef CONFIG_SLUB_DEBUG_ON
 DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
@@ -2828,7 +2848,7 @@ redo:
 	if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
 		goto deactivate_slab;
 
-	/* must check again c->page in case IRQ handler changed it */
+	/* must check again c->page in case we got preempted and it changed */
 	local_irq_save(flags);
 	if (unlikely(page != c->page)) {
 		local_irq_restore(flags);
@@ -2887,7 +2907,8 @@ new_slab:
 	}
 	if (unlikely(!slub_percpu_partial(c))) {
 		local_irq_restore(flags);
-		goto new_objects; /* stolen by an IRQ handler */
+		/* we were preempted and partial list got empty */
+		goto new_objects;
 	}
 
 	page = c->page = slub_percpu_partial(c);
@@ -2903,9 +2924,9 @@ new_objects:
 	if (freelist)
 		goto check_new_page;
 
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 	page = new_slab(s, gfpflags, node);
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!page)) {
 		slab_out_of_memory(s, gfpflags, node);
@@ -2988,12 +3009,12 @@ static void *__slab_alloc(struct kmem_ca
 	 * cpu before disabling preemption. Need to reload cpu area
 	 * pointer.
 	 */
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 #endif
 
 	p = ___slab_alloc(s, gfpflags, node, addr, c);
 #ifdef CONFIG_PREEMPT_COUNT
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 #endif
 	return p;
 }
@@ -3522,7 +3543,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	 * IRQs, which protects against PREEMPT and interrupts
 	 * handlers invoking normal fastpath.
 	 */
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 	local_irq_disable();
 
 	for (i = 0; i < size; i++) {
@@ -3568,7 +3589,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
@@ -3578,7 +3599,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 				slab_want_init_on_alloc(flags, s));
 	return i;
 error:
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 	slab_post_alloc_hook(s, objcg, flags, i, p, false);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
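
For illustration only (not part of the patch): a minimal userspace sketch
of the conditional-wrapper idea above.  Everything ending in _stub is a
made-up stand-in for the real kernel primitives and for the s->cpu_slab
per-cpu data, so this shows the pattern, not the kernel implementation.
Built without CONFIG_PREEMPT_RT the wrappers expand to the cheap
preempt_disable()-style pair; built with -DCONFIG_PREEMPT_RT they expand
to the migrate_disable()-style pair, mirroring the commit message's point
that migrate_disable() is an unconditional function call and therefore
only worth paying for on PREEMPT_RT.  Compile with gcc or clang (GNU
statement expressions are used).

/* Userspace sketch of the slub_get_cpu_ptr()/slub_put_cpu_ptr() pattern. */
#include <stdio.h>

static int cpu_slab_stub;	/* stand-in for the s->cpu_slab per-cpu data */

static void preempt_disable_stub(void) { puts("preempt_disable()"); }
static void preempt_enable_stub(void)  { puts("preempt_enable()"); }
static void migrate_disable_stub(void) { puts("migrate_disable()"); }
static void migrate_enable_stub(void)  { puts("migrate_enable()"); }

#ifndef CONFIG_PREEMPT_RT
/* !PREEMPT_RT: keep the cheaper inline preempt_disable()-based pair */
#define slub_get_cpu_ptr(var)	({ preempt_disable_stub(); &(var); })
#define slub_put_cpu_ptr(var)	do { (void)(var); preempt_enable_stub(); } while (0)
#else
/* PREEMPT_RT: pin the task to its CPU instead of disabling preemption */
#define slub_get_cpu_ptr(var)	({ migrate_disable_stub(); &(var); })
#define slub_put_cpu_ptr(var)	do { (void)(var); migrate_enable_stub(); } while (0)
#endif

int main(void)
{
	/* mirrors the ___slab_alloc() usage: pin, work on the per-cpu data, unpin */
	int *c = slub_get_cpu_ptr(cpu_slab_stub);

	*c = 42;
	printf("cpu_slab value: %d\n", *c);

	slub_put_cpu_ptr(cpu_slab_stub);
	return 0;
}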