From patchwork Fri Dec  6 10:17:30 2024
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13896826
From: Jinjie Ruan
Subject: [PATCH -next v5 08/22] arm64: entry: Use different helpers to check resched for PREEMPT_DYNAMIC
Date: Fri, 6 Dec 2024 18:17:30 +0800
Message-ID: <20241206101744.4161990-9-ruanjinjie@huawei.com>
In-Reply-To: <20241206101744.4161990-1-ruanjinjie@huawei.com>
References: <20241206101744.4161990-1-ruanjinjie@huawei.com>

The generic entry code uses two different helpers to check whether a
reschedule is required, depending on whether PREEMPT_DYNAMIC is enabled,
and shares common code between the two cases. In preparation for moving
arm64 over to the generic entry code, use a new helper to check for
rescheduling when PREEMPT_DYNAMIC is enabled, and reuse the common code
path for the disabled case.

No functional changes.
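For reference, the corresponding split in the generic entry code
(kernel/entry/common.c) looks roughly like the excerpt below. This is a
simplified sketch (the real raw_irqentry_exit_cond_resched() also carries
RCU and entry-debug sanity checks), and since arm64 selects
CONFIG_HAVE_PREEMPT_DYNAMIC_KEY, only the static-key variant is shown:

	void raw_irqentry_exit_cond_resched(void)
	{
		if (!preempt_count()) {
			if (need_resched())
				preempt_schedule_irq();
		}
	}

	#ifdef CONFIG_PREEMPT_DYNAMIC
	DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
	void dynamic_irqentry_exit_cond_resched(void)
	{
		if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
			return;
		raw_irqentry_exit_cond_resched();
	}
	#endif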
Signed-off-by: Jinjie Ruan
---
 arch/arm64/include/asm/preempt.h |  3 +++
 arch/arm64/kernel/entry-common.c | 21 +++++++++++----------
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index d0f93385bd85..0f0ba250efe8 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -93,11 +93,14 @@ void dynamic_preempt_schedule(void);
 #define __preempt_schedule()		dynamic_preempt_schedule()
 void dynamic_preempt_schedule_notrace(void);
 #define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
+void dynamic_irqentry_exit_cond_resched(void);
+#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
 
 #else /* CONFIG_PREEMPT_DYNAMIC */
 
 #define __preempt_schedule()		preempt_schedule()
 #define __preempt_schedule_notrace()	preempt_schedule_notrace()
+#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 #endif /* CONFIG_PREEMPTION */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 029f8bd72f8a..015a65d19b52 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -75,10 +75,6 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
 	return state;
 }
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-#endif
-
 static inline bool arm64_need_resched(void)
 {
 	/*
@@ -106,17 +102,22 @@ static inline bool arm64_need_resched(void)
 
 void raw_irqentry_exit_cond_resched(void)
 {
-#ifdef CONFIG_PREEMPT_DYNAMIC
-	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-		return;
-#endif
-
 	if (!preempt_count()) {
 		if (need_resched() && arm64_need_resched())
 			preempt_schedule_irq();
 	}
 }
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_irqentry_exit_cond_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+		return;
+	raw_irqentry_exit_cond_resched();
+}
+#endif
+
 /*
  * Handle IRQ/context state management when exiting to kernel mode.
  * After this function returns it is not safe to call regular kernel code,
@@ -140,7 +141,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
 		}
 
 		if (IS_ENABLED(CONFIG_PREEMPTION))
-			raw_irqentry_exit_cond_resched();
+			irqentry_exit_cond_resched();
 
 		trace_hardirqs_on();
 	} else {
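Taken together, the irqentry_exit_cond_resched() call in
__exit_to_kernel_mode() now resolves per configuration via the preempt.h
macros quoted from the hunk above:

	/* CONFIG_PREEMPT_DYNAMIC=y: dispatch through the static-key wrapper */
	#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()

	/* CONFIG_PREEMPT_DYNAMIC=n: call the common body directly */
	#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()

This mirrors the generic entry behaviour, so the call site should not need
to change again when arm64 switches over.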