From patchwork Mon Aug 23 14:58:16 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka <vbabka@suse.cz>
X-Patchwork-Id: 12453017
From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton, Christoph Lameter, David Rientjes, Pekka Enberg,
 Joonsoo Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Galbraith,
 Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman,
 Jesper Dangaard Brouer, Jann Horn, Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v5 25/35] mm, slub: separate detaching of partial list in
 unfreeze_partials() from unfreezing
Date: Mon, 23 Aug 2021 16:58:16 +0200
Message-Id: <20210823145826.3857-26-vbabka@suse.cz>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210823145826.3857-1-vbabka@suse.cz>
References: <20210823145826.3857-1-vbabka@suse.cz>

Unfreezing the partial list can be split into two phases: detaching the
list from struct kmem_cache_cpu, and processing the list. The whole
operation does not need to be protected by disabled irqs, so
restructure the code to separate the detaching (done with irqs
disabled) from the unfreezing (whose irq disabling will be reduced in
the next patch).

Also, unfreeze_partials() can be called from another cpu on behalf of a
cpu that is being offlined, where disabling irqs on the local cpu makes
no sense. Restructure the code as follows:

- __unfreeze_partials() is the bulk of the old unfreeze_partials() and
  processes a detached percpu partial list
- unfreeze_partials() detaches the list from the current cpu with irqs
  disabled and calls __unfreeze_partials()
- unfreeze_partials_cpu() is called for an offlined cpu, so it needs no
  irq disabling; it is called from __flush_cpu_slab()
- flush_cpu_slab() runs on the local cpu and thus needs to call
  unfreeze_partials(), so it can no longer simply call
  __flush_cpu_slab(smp_processor_id()); the proper calls are open-coded
  instead

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 73 ++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 51 insertions(+), 22 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d8bfc41dc1f0..5f57e36e93ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2333,25 +2333,15 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 	}
 }
 
-/*
- * Unfreeze all the cpu partial slabs.
- *
- * This function must be called with preemption or migration
- * disabled with c local to the cpu.
- */
-static void unfreeze_partials(struct kmem_cache *s,
-		struct kmem_cache_cpu *c)
-{
 #ifdef CONFIG_SLUB_CPU_PARTIAL
+static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
+{
 	struct kmem_cache_node *n = NULL, *n2 = NULL;
-	struct page *page, *partial_page, *discard_page = NULL;
+	struct page *page, *discard_page = NULL;
 	unsigned long flags;
 
 	local_irq_save(flags);
 
-	partial_page = slub_percpu_partial(c);
-	c->partial = NULL;
-
 	while (partial_page) {
 		struct page new;
 		struct page old;
@@ -2406,10 +2396,45 @@ static void unfreeze_partials(struct kmem_cache *s,
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
+}
 
-#endif	/* CONFIG_SLUB_CPU_PARTIAL */
+/*
+ * Unfreeze all the cpu partial slabs.
+ */
+static void unfreeze_partials(struct kmem_cache *s)
+{
+	struct page *partial_page;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	partial_page = this_cpu_read(s->cpu_slab->partial);
+	this_cpu_write(s->cpu_slab->partial, NULL);
+	local_irq_restore(flags);
+
+	if (partial_page)
+		__unfreeze_partials(s, partial_page);
 }
 
+static void unfreeze_partials_cpu(struct kmem_cache *s,
+				  struct kmem_cache_cpu *c)
+{
+	struct page *partial_page;
+
+	partial_page = slub_percpu_partial(c);
+	c->partial = NULL;
+
+	if (partial_page)
+		__unfreeze_partials(s, partial_page);
+}
+
+#else	/* CONFIG_SLUB_CPU_PARTIAL */
+
+static inline void unfreeze_partials(struct kmem_cache *s) { }
+static inline void unfreeze_partials_cpu(struct kmem_cache *s,
+					 struct kmem_cache_cpu *c) { }
+
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
+
 /*
  * Put a page that was just frozen (in __slab_free|get_partial_node) into a
  * partial page slot if available.
@@ -2438,7 +2463,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 			 * partial array is full. Move the existing
 			 * set to the per node partial list.
 			 */
-			unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
+			unfreeze_partials(s);
 			oldpage = NULL;
 			pobjects = 0;
 			pages = 0;
@@ -2473,11 +2498,6 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 	stat(s, CPUSLAB_FLUSH);
 }
 
-/*
- * Flush cpu slab.
- *
- * Called from IPI handler with interrupts disabled.
- */
 static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
@@ -2485,14 +2505,23 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 	if (c->page)
 		flush_slab(s, c);
 
-	unfreeze_partials(s, c);
+	unfreeze_partials_cpu(s, c);
 }
 
+/*
+ * Flush cpu slab.
+ *
+ * Called from IPI handler with interrupts disabled.
+ */
 static void flush_cpu_slab(void *d)
 {
 	struct kmem_cache *s = d;
+	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
 
-	__flush_cpu_slab(s, smp_processor_id());
+	if (c->page)
+		flush_slab(s, c);
+
+	unfreeze_partials(s);
 }
 
 static bool has_cpu_slab(int cpu, void *info)
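
For readers tracking the locking pattern rather than the SLUB details, below
is a minimal, self-contained userspace C analogue of the detach-then-process
split introduced above. It is a sketch, not kernel code: the names (struct
cpu_state, detach_and_process(), process_detached()) are invented for
illustration, and a pthread mutex stands in for
local_irq_save()/local_irq_restore(). The sketch also walks the detached list
entirely outside the critical section, which is the end state this series is
working toward; in this patch itself, __unfreeze_partials() still runs under
local_irq_save().

/* Sketch only; build with: cc -pthread sketch.c */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

struct node {
	int id;
	struct node *next;
};

struct cpu_state {
	pthread_mutex_t lock;	/* stands in for disabling irqs */
	struct node *partial;	/* stands in for kmem_cache_cpu::partial */
};

/* Analogue of __unfreeze_partials(): process an already-detached list. */
static void process_detached(struct node *list)
{
	while (list) {
		struct node *next = list->next;
		printf("processing node %d\n", list->id);
		free(list);
		list = next;
	}
}

/* Analogue of unfreeze_partials(): detach under the lock, process outside. */
static void detach_and_process(struct cpu_state *c)
{
	struct node *list;

	pthread_mutex_lock(&c->lock);	/* local_irq_save() in the kernel */
	list = c->partial;
	c->partial = NULL;
	pthread_mutex_unlock(&c->lock);	/* local_irq_restore() */

	if (list)
		process_detached(list);
}

/*
 * Analogue of unfreeze_partials_cpu(): the owning "cpu" is offline, so
 * nobody else touches the list head and no protection is needed.
 */
static void detach_and_process_offline(struct cpu_state *c)
{
	struct node *list = c->partial;

	c->partial = NULL;
	if (list)
		process_detached(list);
}

int main(void)
{
	struct cpu_state c = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.partial = NULL,
	};

	/* Populate the "percpu partial list" with a few nodes. */
	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));
		n->id = i;
		n->next = c.partial;
		c.partial = n;
	}

	detach_and_process(&c);		/* local-cpu path */
	detach_and_process_offline(&c);	/* offline path; list now empty */
	return 0;
}

The property being mirrored is the one the changelog relies on: only the
handoff of the shared list head needs protection, so the potentially long
list walk can happen without blocking interrupts, and an offline owner's
list can be detached with no protection at all.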