| Message ID | 20241025100700.3714552-16-ruanjinjie@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | arm64: entry: Convert to generic entry |
On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
> As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is

Once this series is applied nobody knows what 'front patch 6 ~ 13' did.

> same with the irq preempt schedule code of generic entry besides those
> architecture-related logic called arm64_irqentry_exit_need_resched().
>
> So add arch irqentry_exit_need_resched() to support architecture-related
> need_resched() check logic, which do not affect existing architectures
> that use generic entry, but support arm64 to use generic irq entry.

Simply say:

ARM64 requires an additional whether to reschedule on return from
interrupt.

Add arch_irqentry_exit_need_resched() as the default NOOP
implementation and hook it up into the need_resched() condition in
raw_irqentry_exit_cond_resched().

This allows ARM64 to implement the architecture specific version for
switchting over to the generic entry code.

That explains things completely independently. Hmm?

Thanks,

tglx
On Mon, Oct 28 2024 at 19:05, Thomas Gleixner wrote:
> On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
>
>> As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is
>
> Once this series is applied nobody knows what 'front patch 6 ~ 13' did.
>
>> same with the irq preempt schedule code of generic entry besides those
>> architecture-related logic called arm64_irqentry_exit_need_resched().
>>
>> So add arch irqentry_exit_need_resched() to support architecture-related
>> need_resched() check logic, which do not affect existing architectures
>> that use generic entry, but support arm64 to use generic irq entry.
>
> Simply say:
>
> ARM64 requires an additional whether to reschedule on return from

  ARM64 requires an additional check whether to reschedule on return from

obviously...

> interrupt.
>
> Add arch_irqentry_exit_need_resched() as the default NOOP
> implementation and hook it up into the need_resched() condition in
> raw_irqentry_exit_cond_resched().
>
> This allows ARM64 to implement the architecture specific version for
> switchting over to the generic entry code.
>
> That explains things completely independently. Hmm?
>
> Thanks,
>
> tglx
On 2024/10/29 2:05, Thomas Gleixner wrote:
> On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
>
>> As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is
>
> Once this series is applied nobody knows what 'front patch 6 ~ 13' did.

Yes, if some of the previous patches are applied, the description will
immediately become difficult to understand; the similar commit messages
of the other patches will be updated too.

>
>> same with the irq preempt schedule code of generic entry besides those
>> architecture-related logic called arm64_irqentry_exit_need_resched().
>>
>> So add arch irqentry_exit_need_resched() to support architecture-related
>> need_resched() check logic, which do not affect existing architectures
>> that use generic entry, but support arm64 to use generic irq entry.
>
> Simply say:
>
> ARM64 requires an additional whether to reschedule on return from
> interrupt.
>
> Add arch_irqentry_exit_need_resched() as the default NOOP
> implementation and hook it up into the need_resched() condition in
> raw_irqentry_exit_cond_resched().
>
> This allows ARM64 to implement the architecture specific version for
> switchting over to the generic entry code.
>
> That explains things completely independently. Hmm?

Of course, this is clearer, less coupled to the other patches, and
describes how to implement it.

>
> Thanks,
>
> tglx
>
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 2ad132c7be05..0cc117b658b8 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -143,6 +143,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ *
+ * Invoked from raw_irqentry_exit_cond_resched() to check if need resched.
+ * Defaults return true.
+ *
+ * The main purpose is to permit arch to skip preempt a task from an IRQ.
+ */
+static inline bool arch_irqentry_exit_need_resched(void);
+
+#ifndef arch_irqentry_exit_need_resched
+static inline bool arch_irqentry_exit_need_resched(void) { return true; }
+#endif
+
 void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
@@ -150,7 +164,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
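For illustration only (not part of the patch above): with this hook in place, an architecture that needs a non-default check would define arch_irqentry_exit_need_resched() in a header that kernel/entry/common.c already pulls in, and define the matching macro so the #ifndef default is skipped. The sketch below is hypothetical; the header location and the DAIF/priority-masking condition are assumptions modelled on the check in arm64's existing arm64_preempt_schedule_irq(), not code from this series.

/* Hypothetical sketch, e.g. arch/arm64/include/asm/entry-common.h (location assumed) */

#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static inline bool arch_irqentry_exit_need_resched(void)
{
	/*
	 * Assumed example condition: if GIC priority masking (pseudo-NMI)
	 * is in use and DAIF bits are still set, an NMI was handled, so do
	 * not preempt the interrupted context from here.
	 */
	if (system_uses_irq_prio_masking() && read_sysreg(daif))
		return false;

	return true;
}
/* Defining the macro makes the #ifndef in kernel/entry/common.c skip the default. */
#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched

Architectures that provide nothing keep the current behaviour: the default implementation returns true, so the need_resched() condition in raw_irqentry_exit_cond_resched() is effectively unchanged for them.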
As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is
same with the irq preempt schedule code of generic entry besides those
architecture-related logic called arm64_irqentry_exit_need_resched().

So add arch irqentry_exit_need_resched() to support architecture-related
need_resched() check logic, which do not affect existing architectures
that use generic entry, but support arm64 to use generic irq entry.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Kevin Brodsky <kevin.brodsky@arm.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/entry/common.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)