From patchwork Wed Sep 8 02:53:18 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12479803
Date: Tue, 07 Sep 2021 19:53:18 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, quic_qiancai@quicinc.com, rientjes@google.com,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 007/147] mm, slub: dissolve new_slab_objects() into
 ___slab_alloc()
Message-ID: <20210908025318.0a_jq6Ci5%akpm@linux-foundation.org>
In-Reply-To: <20210907195226.14b1d22a07c085b22968b933@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: Vlastimil Babka
Subject: mm, slub: dissolve new_slab_objects() into ___slab_alloc()

The later patches will need more fine grained control over individual
actions in ___slab_alloc(), the only caller of new_slab_objects(), so
dissolve it there.  This is a preparatory step with no functional change.

The only minor change is moving WARN_ON_ONCE() for using a constructor
together with __GFP_ZERO to new_slab(), which makes it somewhat less
frequent, but still able to catch a development change introducing a
systematic misuse.

Link: https://lkml.kernel.org/r/20210904105003.11688-8-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Acked-by: Christoph Lameter
Acked-by: Mel Gorman
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Qian Cai
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   50 ++++++++++++++++++--------------------------------
 1 file changed, 18 insertions(+), 32 deletions(-)

--- a/mm/slub.c~mm-slub-dissolve-new_slab_objects-into-___slab_alloc
+++ a/mm/slub.c
@@ -1885,6 +1885,8 @@ static struct page *new_slab(struct kmem
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
 
+	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
+
 	return allocate_slab(s,
 		flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
 }
@@ -2610,36 +2612,6 @@ slab_out_of_memory(struct kmem_cache *s,
 #endif
 }
 
-static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
-			int node, struct kmem_cache_cpu **pc)
-{
-	void *freelist = NULL;
-	struct kmem_cache_cpu *c = *pc;
-	struct page *page;
-
-	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
-
-	page = new_slab(s, flags, node);
-	if (page) {
-		c = raw_cpu_ptr(s->cpu_slab);
-		if (c->page)
-			flush_slab(s, c);
-
-		/*
-		 * No other reference to the page yet so we can
-		 * muck around with it freely without cmpxchg
-		 */
-		freelist = page->freelist;
-		page->freelist = NULL;
-
-		stat(s, ALLOC_SLAB);
-		c->page = page;
-		*pc = c;
-	}
-
-	return freelist;
-}
-
 static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
 {
 	if (unlikely(PageSlabPfmemalloc(page)))
@@ -2786,13 +2758,27 @@ new_slab:
 	if (freelist)
 		goto check_new_page;
 
-	freelist = new_slab_objects(s, gfpflags, node, &c);
+	page = new_slab(s, gfpflags, node);
 
-	if (unlikely(!freelist)) {
+	if (unlikely(!page)) {
 		slab_out_of_memory(s, gfpflags, node);
 		return NULL;
 	}
 
+	c = raw_cpu_ptr(s->cpu_slab);
+	if (c->page)
+		flush_slab(s, c);
+
+	/*
+	 * No other reference to the page yet so we can
+	 * muck around with it freely without cmpxchg
+	 */
+	freelist = page->freelist;
+	page->freelist = NULL;
+
+	stat(s, ALLOC_SLAB);
+	c->page = page;
+
 check_new_page:
 	page = c->page;
 	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
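
For readers outside the SLUB code base, the sketch below is a minimal
userspace model (not kernel code) of the check that this patch moves into
new_slab(): a cache constructor pre-initializes objects, so also requesting
zeroed memory (__GFP_ZERO) would wipe that initialization again, which is
why the combination is warned about.  The names toy_cache, toy_alloc and
TOY_ZERO are hypothetical stand-ins, not kernel APIs.

/*
 * Userspace illustration of WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO)).
 * toy_cache/toy_alloc/TOY_ZERO are made-up names, not kernel interfaces.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TOY_ZERO 0x1u			/* stand-in for __GFP_ZERO */

struct toy_cache {
	size_t size;
	void (*ctor)(void *obj);	/* optional constructor, like s->ctor */
};

static void *toy_alloc(struct toy_cache *c, unsigned int flags)
{
	static int warned;		/* mimic the "once" in WARN_ON_ONCE() */
	void *obj;

	if (c->ctor && (flags & TOY_ZERO) && !warned) {
		fprintf(stderr, "warning: constructor + zeroed allocation is contradictory\n");
		warned = 1;
	}

	obj = malloc(c->size);
	if (!obj)
		return NULL;

	if (c->ctor)
		c->ctor(obj);			/* constructor sets the object up... */
	if (flags & TOY_ZERO)
		memset(obj, 0, c->size);	/* ...and zeroing undoes it */

	return obj;
}

static void list_node_ctor(void *obj)
{
	/* Pretend the object embeds a self-pointing list head. */
	*(void **)obj = obj;
}

int main(void)
{
	struct toy_cache cache = {
		.size = sizeof(void *),
		.ctor = list_node_ctor,
	};
	void *obj = toy_alloc(&cache, TOY_ZERO);	/* triggers the warning */

	free(obj);
	return 0;
}

Built with a standard C compiler, running this prints the warning on the
first contradictory request only, analogous to how WARN_ON_ONCE() is meant
to catch a systematic misuse introduced during development rather than log
every occurrence.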