From patchwork Tue Apr 11 13:08:54 2023
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13207575
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: vbabka@suse.cz, 42.hyeyoo@gmail.com, akpm@linux-foundation.org,
 roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com, rientjes@google.com,
 penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng, Zhao Gongyi
Subject: [PATCH] mm: slub: annotate kmem_cache_node->list_lock as raw_spinlock
Date: Tue, 11 Apr 2023 21:08:54 +0800
Message-Id: <20230411130854.46795-1-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0

The list_lock can be held inside the critical section of a raw_spinlock,
and then lockdep complains about it like below:

 =============================
 [ BUG: Invalid wait context ]
 6.3.0-rc6-next-20230411 #7 Not tainted
 -----------------------------
 swapper/0/1 is trying to lock:
 ffff888100055418 (&n->list_lock){....}-{3:3}, at: ___slab_alloc+0x73d/0x1330
 other info that might help us debug this:
 context-{5:5}
 2 locks held by swapper/0/1:
  #0: ffffffff824e8160 (rcu_tasks.cbs_gbl_lock){....}-{2:2}, at: cblist_init_generic+0x22/0x2d0
  #1: ffff888136bede50 (&ACCESS_PRIVATE(rtpcp, lock)){....}-{2:2}, at: cblist_init_generic+0x232/0x2d0
 stack backtrace:
 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.3.0-rc6-next-20230411 #7
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 Call Trace:
  dump_stack_lvl+0x77/0xc0
  __lock_acquire+0xa65/0x2950
  ? arch_stack_walk+0x65/0xf0
  ? arch_stack_walk+0x65/0xf0
  ? unwind_next_frame+0x602/0x8d0
  lock_acquire+0xe0/0x300
  ? ___slab_alloc+0x73d/0x1330
  ? find_usage_forwards+0x39/0x50
  ? check_irq_usage+0x162/0xa70
  ? __bfs+0x10c/0x2c0
  _raw_spin_lock_irqsave+0x4f/0x90
  ? ___slab_alloc+0x73d/0x1330
  ___slab_alloc+0x73d/0x1330
  ? fill_pool+0x16b/0x2a0
  ? look_up_lock_class+0x5d/0x160
  ? register_lock_class+0x48/0x500
  ? __lock_acquire+0xabc/0x2950
  ? fill_pool+0x16b/0x2a0
  kmem_cache_alloc+0x358/0x3b0
  ? __lock_acquire+0xabc/0x2950
  fill_pool+0x16b/0x2a0
  ? __debug_object_init+0x292/0x560
  ? lock_acquire+0xe0/0x300
  ? cblist_init_generic+0x232/0x2d0
  __debug_object_init+0x2c/0x560
  cblist_init_generic+0x147/0x2d0
  rcu_init_tasks_generic+0x15/0x190
  kernel_init_freeable+0x6e/0x3e0
  ? rest_init+0x1e0/0x1e0
  kernel_init+0x1b/0x1d0
  ? rest_init+0x1e0/0x1e0
  ret_from_fork+0x1f/0x30

fill_pool() can only be called in a !PREEMPT_RT kernel, or from
preemptible context in a PREEMPT_RT kernel, so the above warning is not
a real issue. Still, it is better to annotate
kmem_cache_node->list_lock as a raw_spinlock to get rid of such
warnings.

Reported-by: Zhao Gongyi
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/slab.h |  4 ++--
 mm/slub.c | 66 +++++++++++++++++++++++++++----------------------
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index f01ac256a8f5..43f3436d13b4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -723,8 +723,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
  * The slab lists for all objects.
  */
 struct kmem_cache_node {
-#ifdef CONFIG_SLAB
 	raw_spinlock_t list_lock;
+
+#ifdef CONFIG_SLAB
 	struct list_head slabs_partial;	/* partial list first, better asm code */
 	struct list_head slabs_full;
 	struct list_head slabs_free;
@@ -740,7 +741,6 @@ struct kmem_cache_node {
 #endif
 
 #ifdef CONFIG_SLUB
-	spinlock_t list_lock;
 	unsigned long nr_partial;
 	struct list_head partial;
 #ifdef CONFIG_SLUB_DEBUG
diff --git a/mm/slub.c b/mm/slub.c
index c87628cd8a9a..e66a35643624 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1331,7 +1331,7 @@ static void add_full(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 	list_add(&slab->slab_list, &n->full);
 }
 
@@ -1340,7 +1340,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 	list_del(&slab->slab_list);
 }
 
@@ -2113,14 +2113,14 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 static inline void add_partial(struct kmem_cache_node *n,
 				struct slab *slab, int tail)
 {
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 	__add_partial(n, slab, tail);
 }
 
 static inline void remove_partial(struct kmem_cache_node *n,
 					struct slab *slab)
 {
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 	list_del(&slab->slab_list);
 	n->nr_partial--;
 }
@@ -2136,7 +2136,7 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
 {
 	void *object;
 
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 
 	object = slab->freelist;
 	slab->freelist = get_freepointer(s, object);
@@ -2181,7 +2181,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
 		 */
 		return NULL;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 
 	if (slab->inuse == slab->objects)
 		add_full(s, n, slab);
@@ -2189,7 +2189,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
 		add_partial(n, slab, DEACTIVATE_TO_HEAD);
 
 	inc_slabs_node(s, nid, slab->objects);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 	return object;
 }
@@ -2208,7 +2208,7 @@ static inline void *acquire_slab(struct kmem_cache *s,
 	unsigned long counters;
 	struct slab new;
 
-	lockdep_assert_held(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 
 	/*
 	 * Zap the freelist and set the frozen bit.
@@ -2267,7 +2267,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	if (!n || !n->nr_partial)
 		return NULL;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
 		void *t;
 
@@ -2304,7 +2304,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 #endif
 
 	}
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return object;
 }
 
@@ -2548,7 +2548,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 		 * Taking the spinlock removes the possibility that
 		 * acquire_slab() will see a slab that is frozen
 		 */
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 	} else {
 		mode = M_FULL_NOLIST;
 	}
@@ -2559,14 +2559,14 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 				new.freelist, new.counters,
 				"unfreezing slab")) {
 		if (mode == M_PARTIAL)
-			spin_unlock_irqrestore(&n->list_lock, flags);
+			raw_spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
 	}
 
 
 	if (mode == M_PARTIAL) {
 		add_partial(n, slab, tail);
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, tail);
 	} else if (mode == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
@@ -2594,10 +2594,10 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 		n2 = get_node(s, slab_nid(slab));
 		if (n != n2) {
 			if (n)
-				spin_unlock_irqrestore(&n->list_lock, flags);
+				raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 			n = n2;
-			spin_lock_irqsave(&n->list_lock, flags);
+			raw_spin_lock_irqsave(&n->list_lock, flags);
 		}
 
 		do {
@@ -2626,7 +2626,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	}
 
 	if (n)
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 	while (slab_to_discard) {
 		slab = slab_to_discard;
@@ -2951,10 +2951,10 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	unsigned long x = 0;
 	struct slab *slab;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(slab, &n->partial, slab_list)
 		x += get_count(slab);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
 }
 #endif /* CONFIG_SLUB_DEBUG || SLAB_SUPPORTS_SYSFS */
@@ -3515,7 +3515,7 @@ static noinline void free_to_partial_list(
 	if (s->flags & SLAB_STORE_USER)
 		handle = set_track_prepare();
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 
 	if (free_debug_processing(s, slab, head, tail, &cnt, addr, handle)) {
 		void *prior = slab->freelist;
@@ -3554,7 +3554,7 @@ static noinline void free_to_partial_list(
 		dec_slabs_node(s, slab_nid(slab_free), slab_free->objects);
 	}
 
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 	if (slab_free) {
 		stat(s, FREE_SLAB);
@@ -3594,7 +3594,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 
 	do {
 		if (unlikely(n)) {
-			spin_unlock_irqrestore(&n->list_lock, flags);
+			raw_spin_unlock_irqrestore(&n->list_lock, flags);
 			n = NULL;
 		}
 		prior = slab->freelist;
@@ -3626,7 +3626,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 * Otherwise the list_lock will synchronize with
 				 * other processors updating the list of slabs.
 				 */
-				spin_lock_irqsave(&n->list_lock, flags);
+				raw_spin_lock_irqsave(&n->list_lock, flags);
 
 			}
 		}
@@ -3668,7 +3668,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		add_partial(n, slab, DEACTIVATE_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
 	}
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return;
 
 slab_empty:
@@ -3683,7 +3683,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		remove_full(s, n, slab);
 	}
 
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	stat(s, FREE_SLAB);
 	discard_slab(s, slab);
 }
@@ -4180,7 +4180,7 @@ static void
 init_kmem_cache_node(struct kmem_cache_node *n)
 {
 	n->nr_partial = 0;
-	spin_lock_init(&n->list_lock);
+	raw_spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
@@ -4576,7 +4576,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	struct slab *slab, *h;
 
 	BUG_ON(irqs_disabled());
-	spin_lock_irq(&n->list_lock);
+	raw_spin_lock_irq(&n->list_lock);
 	list_for_each_entry_safe(slab, h, &n->partial, slab_list) {
 		if (!slab->inuse) {
 			remove_partial(n, slab);
@@ -4586,7 +4586,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 			  "Objects remaining in %s on __kmem_cache_shutdown()");
 		}
 	}
-	spin_unlock_irq(&n->list_lock);
+	raw_spin_unlock_irq(&n->list_lock);
 
 	list_for_each_entry_safe(slab, h, &discard, slab_list)
 		discard_slab(s, slab);
@@ -4790,7 +4790,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 		for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
 			INIT_LIST_HEAD(promote + i);
 
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 
 		/*
 		 * Build lists of slabs to discard or promote.
@@ -4822,7 +4822,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 		for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
 			list_splice(promote + i, &n->partial);
 
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
 		list_for_each_entry_safe(slab, t, &discard, slab_list)
@@ -5147,7 +5147,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	struct slab *slab;
 	unsigned long flags;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 
 	list_for_each_entry(slab, &n->partial, slab_list) {
 		validate_slab(s, slab, obj_map);
@@ -5173,7 +5173,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	}
 
 out:
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return count;
 }
 
@@ -6399,12 +6399,12 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 		if (!atomic_long_read(&n->nr_slabs))
 			continue;
 
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(slab, &n->partial, slab_list)
 			process_slab(t, s, slab, alloc, obj_map);
 		list_for_each_entry(slab, &n->full, slab_list)
 			process_slab(t, s, slab, alloc, obj_map);
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
 	/* Sort locations by count */
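
The warning quoted in the changelog comes from lockdep's wait-context
checking (CONFIG_PROVE_RAW_LOCK_NESTING): taking a spinlock_t, which
becomes a sleeping lock on PREEMPT_RT, while a raw_spinlock_t is already
held is reported as an invalid wait context. That is exactly the nesting
in the trace above: fill_pool() calls kmem_cache_alloc(), and
___slab_alloc() then takes SLUB's spinlock_t list_lock while the
rcu_tasks raw locks are held. Below is a minimal, hypothetical sketch
(not part of this patch; the module and lock names are invented) of the
two nestings involved:

/*
 * Hypothetical out-of-tree demo module -- not part of this patch; all
 * names below are made up for illustration only.
 *
 * With CONFIG_PROVE_RAW_LOCK_NESTING=y, lockdep flags the spinlock_t
 * acquisition inside the raw_spinlock_t section as an invalid wait
 * context (spinlock_t becomes a sleeping lock on PREEMPT_RT), which is
 * the same class of splat quoted in the changelog. The raw-in-raw
 * nesting that this patch switches SLUB's list_lock to is allowed.
 */
#include <linux/module.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(outer_raw_lock);	/* like rtpcp->lock in the splat */
static DEFINE_SPINLOCK(inner_spinlock);		/* like SLUB's old n->list_lock */
static DEFINE_RAW_SPINLOCK(inner_raw_lock);	/* like n->list_lock after this patch */

static int __init nesting_demo_init(void)
{
	unsigned long flags;

	/* Invalid wait context: sleeping lock taken under a raw spinlock. */
	raw_spin_lock_irqsave(&outer_raw_lock, flags);
	spin_lock(&inner_spinlock);
	spin_unlock(&inner_spinlock);
	raw_spin_unlock_irqrestore(&outer_raw_lock, flags);

	/* Fine on all configurations: raw spinlock under a raw spinlock. */
	raw_spin_lock_irqsave(&outer_raw_lock, flags);
	raw_spin_lock(&inner_raw_lock);
	raw_spin_unlock(&inner_raw_lock);
	raw_spin_unlock_irqrestore(&outer_raw_lock, flags);

	return 0;
}

static void __exit nesting_demo_exit(void)
{
}

module_init(nesting_demo_init);
module_exit(nesting_demo_exit);
MODULE_LICENSE("GPL");

The cost of the raw_spinlock annotation is that the lock keeps spinning
even on PREEMPT_RT, so every list_lock critical section touched by this
patch has to remain short and must not sleep.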