From patchwork Mon Oct  4 13:46:16 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12534125
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 28/62] mm/slub: Convert deactivate_slab() to take a struct slab
Date: Mon,  4 Oct 2021 14:46:16 +0100
Message-Id: <20211004134650.4031813-29-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
Improves type safety and removes calls to slab_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e6fd0619d1f2..5330d0b02f13 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2298,25 +2298,25 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
 }
 
 /*
- * Finishes removing the cpu slab. Merges cpu's freelist with page's freelist,
+ * Finishes removing the cpu slab. Merges cpu's freelist with slab's freelist,
  * unfreezes the slabs and puts it on the proper list.
  * Assumes the slab has been already safely taken away from kmem_cache_cpu
  * by the caller.
  */
-static void deactivate_slab(struct kmem_cache *s, struct page *page,
+static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    void *freelist)
 {
 	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
-	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
+	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	int lock = 0, free_delta = 0;
 	enum slab_modes l = M_NONE, m = M_NONE;
 	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	unsigned long flags = 0;
-	struct page new;
-	struct page old;
+	struct slab new;
+	struct slab old;
 
-	if (page->freelist) {
+	if (slab->freelist) {
 		stat(s, DEACTIVATE_REMOTE_FREES);
 		tail = DEACTIVATE_TO_TAIL;
 	}
@@ -2335,7 +2335,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		 * 'freelist_iter' is already corrupted. So isolate all objects
 		 * starting at 'freelist_iter' by skipping them.
 		 */
-		if (freelist_corrupted(s, page, &freelist_iter, nextfree))
+		if (freelist_corrupted(s, slab_page(slab), &freelist_iter, nextfree))
 			break;
 
 		freelist_tail = freelist_iter;
@@ -2345,25 +2345,25 @@
 	}
 
 	/*
-	 * Stage two: Unfreeze the page while splicing the per-cpu
-	 * freelist to the head of page's freelist.
+	 * Stage two: Unfreeze the slab while splicing the per-cpu
+	 * freelist to the head of slab's freelist.
 	 *
-	 * Ensure that the page is unfrozen while the list presence
+	 * Ensure that the slab is unfrozen while the list presence
 	 * reflects the actual number of objects during unfreeze.
 	 *
 	 * We setup the list membership and then perform a cmpxchg
-	 * with the count. If there is a mismatch then the page
-	 * is not unfrozen but the page is on the wrong list.
+	 * with the count. If there is a mismatch then the slab
+	 * is not unfrozen but the slab is on the wrong list.
 	 *
 	 * Then we restart the process which may have to remove
-	 * the page from the list that we just put it on again
+	 * the slab from the list that we just put it on again
 	 * because the number of objects in the slab may have
 	 * changed.
 	 */
 redo:
 
-	old.freelist = READ_ONCE(page->freelist);
-	old.counters = READ_ONCE(page->counters);
+	old.freelist = READ_ONCE(slab->freelist);
+	old.counters = READ_ONCE(slab->counters);
 	VM_BUG_ON(!old.frozen);
 
 	/* Determine target state of the slab */
@@ -2385,7 +2385,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			lock = 1;
 			/*
 			 * Taking the spinlock removes the possibility
-			 * that acquire_slab() will see a slab page that
+			 * that acquire_slab() will see a slab that
 			 * is frozen
 			 */
 			spin_lock_irqsave(&n->list_lock, flags);
@@ -2405,18 +2405,18 @@
 
 	if (l != m) {
 		if (l == M_PARTIAL)
-			remove_partial(n, page);
+			remove_partial(n, slab_page(slab));
 		else if (l == M_FULL)
-			remove_full(s, n, page);
+			remove_full(s, n, slab_page(slab));
 
 		if (m == M_PARTIAL)
-			add_partial(n, page, tail);
+			add_partial(n, slab_page(slab), tail);
 		else if (m == M_FULL)
-			add_full(s, n, page);
+			add_full(s, n, slab_page(slab));
 	}
 
 	l = m;
-	if (!cmpxchg_double_slab(s, page,
+	if (!cmpxchg_double_slab(s, slab_page(slab),
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab"))
@@ -2431,7 +2431,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		stat(s, DEACTIVATE_FULL);
 	else if (m == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, page);
+		discard_slab(s, slab_page(slab));
 		stat(s, FREE_SLAB);
 	}
 }
@@ -2603,7 +2603,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (slab) {
-		deactivate_slab(s, slab_page(slab), freelist);
+		deactivate_slab(s, slab, freelist);
 		stat(s, CPUSLAB_FLUSH);
 	}
 }
@@ -2619,7 +2619,7 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 	c->tid = next_tid(c->tid);
 
 	if (slab) {
-		deactivate_slab(s, slab_page(slab), freelist);
+		deactivate_slab(s, slab, freelist);
 		stat(s, CPUSLAB_FLUSH);
 	}
 
@@ -2961,7 +2961,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c->slab = NULL;
 	c->freelist = NULL;
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-	deactivate_slab(s, slab_page(slab), freelist);
+	deactivate_slab(s, slab, freelist);
 
 new_slab:
 
@@ -3043,7 +3043,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-	deactivate_slab(s, slab_page(flush_slab), flush_freelist);
+	deactivate_slab(s, flush_slab, flush_freelist);
 
 	stat(s, CPUSLAB_FLUSH);
 
@@ -3055,7 +3055,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 return_single:
 
-	deactivate_slab(s, slab_page(slab), get_freepointer(s, freelist));
+	deactivate_slab(s, slab, get_freepointer(s, freelist));
 	return freelist;
 }
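
A note for readers following along outside the kernel tree: the type-safety
win comes from struct slab being a distinct C type that describes the same
underlying memory as struct page, so mixing the two up becomes a
compile-time error, and each remaining slab_page() call above marks a
boundary into not-yet-converted code. Below is a minimal standalone sketch
of that pattern; the field layout and helpers are simplified stand-ins for
illustration, not the kernel's actual definitions.

#include <stdio.h>

/* Simplified stand-ins; the kernel's real structs carry many more
 * fields and live in the mm/ headers. */
struct page {
	unsigned long flags;
	void *freelist;
	int nid;		/* stand-in for the page's NUMA node */
};

/* Same memory, distinct type: the compiler now rejects passing a
 * struct page where a struct slab is expected, and vice versa. */
struct slab {
	unsigned long flags;
	void *freelist;
	int nid;
};

/* Explicit, greppable conversion at the boundary to unconverted code. */
static inline struct page *slab_page(struct slab *slab)
{
	return (struct page *)slab;
}

static inline int slab_nid(struct slab *slab)
{
	return slab->nid;
}

/* After this patch, deactivate_slab() takes the stronger type... */
static void deactivate_slab(struct slab *slab)
{
	printf("deactivating slab on node %d\n", slab_nid(slab));
}

/* ...while a not-yet-converted helper still takes struct page. */
static void remove_partial(struct page *page)
{
	printf("removing page, flags %lx\n", page->flags);
}

int main(void)
{
	struct slab s = { .flags = 0, .freelist = NULL, .nid = 0 };

	deactivate_slab(&s);		/* typed call, no cast needed */
	remove_partial(slab_page(&s));	/* conversion is explicit */
	/* remove_partial(&s); -- would no longer compile */
	return 0;
}

The merged version of this work additionally pairs the conversion with
static_assert() checks that the field offsets of the two structs stay in
sync, so the cast stays sound as the layouts evolve.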
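
A second note, on the unfreeze protocol the "Stage two" comment describes:
SLUB decides list placement optimistically, then publishes the unfrozen
state with one compare-and-swap, jumping back to redo: if another CPU
raced. Here is a rough sketch of that shape in C11 atomics; it collapses
the kernel's double-word cmpxchg_double_slab() (freelist pointer plus
counters word) into a single packed word, so the bit layout below is
invented purely for illustration.

#include <stdatomic.h>
#include <stdio.h>

/* Invented packed state word standing in for the slab's counters:
 * low 31 bits = objects in use, top bit = frozen flag. */
#define FROZEN	(1u << 31)

static _Atomic unsigned int counters = FROZEN | 5;

/* Snapshot the old state, compute the new state (and, in SLUB, the
 * partial/full/free list it implies), then publish with one CAS.
 * If another CPU changed the word meanwhile, the CAS fails and the
 * whole decision is redone, since the right list may have changed. */
static void unfreeze(unsigned int freed)
{
	unsigned int old, new;

	do {
		old = atomic_load(&counters);	/* the 'redo:' snapshot */
		new = (old & ~FROZEN) - freed;	/* clear frozen, drop count */
		/* ...list placement would be chosen from 'new' here... */
	} while (!atomic_compare_exchange_weak(&counters, &old, new));

	printf("unfrozen, %u objects in use\n", new & ~FROZEN);
}

int main(void)
{
	unfreeze(2);
	return 0;
}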