From patchwork Mon Oct  4 13:46:02 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12534013
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 14/62] mm/slub: Convert early_kmem_cache_node_alloc() to use struct slab
Date: Mon,  4 Oct 2021 14:46:02 +0100
Message-Id: <20211004134650.4031813-15-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
MIME-Version: 1.0
Add a little type safety.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 555c46cbae1f..41c4ccd67d95 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3891,38 +3891,38 @@ static struct kmem_cache *kmem_cache_node;
  */
 static void early_kmem_cache_node_alloc(int node)
 {
-	struct page *page;
+	struct slab *slab;
 	struct kmem_cache_node *n;
 
 	BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node));
 
-	page = slab_page(new_slab(kmem_cache_node, GFP_NOWAIT, node));
+	slab = new_slab(kmem_cache_node, GFP_NOWAIT, node);
 
-	BUG_ON(!page);
-	if (page_to_nid(page) != node) {
+	BUG_ON(!slab);
+	if (slab_nid(slab) != node) {
 		pr_err("SLUB: Unable to allocate memory from node %d\n", node);
 		pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n");
 	}
 
-	n = page->freelist;
+	n = slab->freelist;
 	BUG_ON(!n);
 #ifdef CONFIG_SLUB_DEBUG
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
 	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
-	page->freelist = get_freepointer(kmem_cache_node, n);
-	page->inuse = 1;
-	page->frozen = 0;
+	slab->freelist = get_freepointer(kmem_cache_node, n);
+	slab->inuse = 1;
+	slab->frozen = 0;
 	kmem_cache_node->node[node] = n;
 	init_kmem_cache_node(n);
-	inc_slabs_node(kmem_cache_node, node, page->objects);
+	inc_slabs_node(kmem_cache_node, node, slab->objects);
 
 	/*
 	 * No locks need to be taken here as it has just been
 	 * initialized and there is no concurrent access.
 	 */
-	__add_partial(n, page, DEACTIVATE_TO_HEAD);
+	__add_partial(n, slab_page(slab), DEACTIVATE_TO_HEAD);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)
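
As a brief aside (not part of the patch): the "little type safety" comes from struct slab being a distinct C type, so the compiler flags a struct slab pointer passed where a struct page pointer is expected unless the conversion is made explicit with slab_page(). Below is a minimal, self-contained sketch of that effect; the struct definitions and slab_page() here are simplified stand-ins, not the real ones this series adds in mm/slab.h.

/*
 * Minimal sketch (not from the patch) of the type-safety benefit.
 * "struct slab" and slab_page() are simplified stand-ins for the
 * real definitions introduced elsewhere in this series.
 */
struct page { unsigned long flags; };

/* A slab gets its own type even though it overlays struct page. */
struct slab { unsigned long flags; };

/* Stand-in for the kernel's slab -> page converter. */
static inline struct page *slab_page(struct slab *slab)
{
	return (struct page *)slab;
}

static void takes_a_page(struct page *page) { (void)page; }
static void takes_a_slab(struct slab *slab) { (void)slab; }

int main(void)
{
	struct slab s = { 0 };

	takes_a_slab(&s);		/* ok */
	takes_a_page(slab_page(&s));	/* explicit conversion, ok */
	/* takes_a_page(&s); */		/* incompatible pointer type: caught */
	return 0;
}

This is also why the final __add_partial() call in the hunk still passes slab_page(slab): __add_partial() has not yet been converted to take a struct slab at this point, presumably happening later in the series.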