From patchwork Thu Aug 25 01:57:22 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12954135
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Sebastian Andrzej Siewior,
	Thomas Gleixner, Mike Galbraith, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] mm/slub: fix comments about fastpath limitation on PREEMPT_RT
Date: Thu, 25 Aug 2022 10:57:22 +0900
Message-Id: <20220825015722.1697209-1-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
On PREEMPT_RT, disabling interrupts is unnecessary, as there is no user
of slab in hardirq context. The limitation of the lockless fastpath on
PREEMPT_RT comes instead from the fact that local_lock does not disable
preemption there. Fix the comments accordingly.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 30c2ee9e8a29..aa42ac6013b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -100,7 +100,7 @@
  * except the stat counters. This is a percpu structure manipulated only by
  * the local cpu, so the lock protects against being preempted or interrupted
  * by an irq. Fast path operations rely on lockless operations instead.
- * On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ * On PREEMPT_RT, the local lock does not actually disable preemption (and thus
  * prevent the lockless operations), so fastpath operations also need to take
  * the lock and are no longer lockless.
  *
@@ -3185,10 +3185,12 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	slab = c->slab;
 	/*
 	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
-	 * slowpath has taken the local_lock_irqsave(), it is not protected
-	 * against a fast path operation in an irq handler. So we need to take
-	 * the slow path which uses local_lock. It is still relatively fast if
-	 * there is a suitable cpu freelist.
+	 * slowpath has taken the local_lock which does not disable preemption
+	 * on PREEMPT_RT, it is not protected against a fast path operation in
+	 * another thread that does not take the local_lock.
+	 *
+	 * So we need to take the slow path which uses local_lock. It is still
+	 * relatively fast if there is a suitable cpu freelist.
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
 	    unlikely(!object || !slab || !node_match(slab, node))) {
@@ -3457,10 +3459,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 #else /* CONFIG_PREEMPT_RT */
 	/*
 	 * We cannot use the lockless fastpath on PREEMPT_RT because if
-	 * a slowpath has taken the local_lock_irqsave(), it is not
-	 * protected against a fast path operation in an irq handler. So
-	 * we need to take the local_lock. We shouldn't simply defer to
-	 * __slab_free() as that wouldn't use the cpu freelist at all.
+	 * a slowpath has taken the local_lock which does not disable
+	 * preemption on PREEMPT_RT, it is not protected against a
+	 * fast path operation in another thread that does not take
+	 * the local_lock.
+	 *
+	 * So we need to take the local_lock. We shouldn't simply defer
+	 * to __slab_free() as that wouldn't use the cpu freelist at all.
 	 */
 	void **freelist;