Message ID: 20201109095021.9897-2-jgross@suse.com (mailing list archive)
State:      Superseded
Series:     xen/x86: implement NMI continuation
On 09.11.2020 10:50, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
>
> Add a framework to continue processing in normal interrupt context
> after leaving NMI processing.
>
> This is done by a high priority interrupt vector triggered via a
> self IPI from NMI context, which will then call the continuation
> function specified during NMI handling.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one further adjustment request:

> @@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
>      nmi_callback = dummy_nmi_callback;
>  }
>
> +bool nmi_check_continuation(void)
> +{
> +    bool ret = false;
> +
> +    return ret;
> +}
> +
> +void trigger_nmi_continuation(void)
> +{
> +    /*
> +     * Issue a self-IPI. Handling is done in spurious_interrupt().
> +     * NMI could have happened in IPI sequence, so wait for ICR being idle
> +     * again before leaving NMI handler.
> +     */
> +    send_IPI_self(SPURIOUS_APIC_VECTOR);
> +    apic_wait_icr_idle();
> +}

This additionally relies on send_IPI_self_legacy() calling
send_IPI_shortcut(), rather than e.g. resolving the local CPU
number to a destination ID. I think this wants saying maybe
here, but more importantly in that function.

Jan
On 11.11.20 16:37, Jan Beulich wrote:
> On 09.11.2020 10:50, Juergen Gross wrote:
>> Actions in NMI context are rather limited as e.g. locking is rather
>> fragile.
>>
>> Add a framework to continue processing in normal interrupt context
>> after leaving NMI processing.
>>
>> This is done by a high priority interrupt vector triggered via a
>> self IPI from NMI context, which will then call the continuation
>> function specified during NMI handling.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one further adjustment request:
>
>> @@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
>>      nmi_callback = dummy_nmi_callback;
>>  }
>>
>> +bool nmi_check_continuation(void)
>> +{
>> +    bool ret = false;
>> +
>> +    return ret;
>> +}
>> +
>> +void trigger_nmi_continuation(void)
>> +{
>> +    /*
>> +     * Issue a self-IPI. Handling is done in spurious_interrupt().
>> +     * NMI could have happened in IPI sequence, so wait for ICR being idle
>> +     * again before leaving NMI handler.
>> +     */
>> +    send_IPI_self(SPURIOUS_APIC_VECTOR);
>> +    apic_wait_icr_idle();
>> +}
>
> This additionally relies on send_IPI_self_legacy() calling
> send_IPI_shortcut(), rather than e.g. resolving the local CPU
> number to a destination ID. I think this wants saying maybe
> here, but more importantly in that function.

Okay.

Juergen
diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..7497ddb5da 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -40,6 +40,7 @@
 #include <irq_vectors.h>
 #include <xen/kexec.h>
 #include <asm/guest.h>
+#include <asm/nmi.h>
 #include <asm/time.h>
 
 static bool __read_mostly tdt_enabled;
@@ -1376,16 +1377,22 @@ void spurious_interrupt(struct cpu_user_regs *regs)
 {
     /*
      * Check if this is a vectored interrupt (most likely, as this is probably
-     * a request to dump local CPU state). Vectored interrupts are ACKed;
-     * spurious interrupts are not.
+     * a request to dump local CPU state or to continue NMI handling).
+     * Vectored interrupts are ACKed; spurious interrupts are not.
      */
     if (apic_isr_read(SPURIOUS_APIC_VECTOR)) {
+        bool is_spurious;
+
         ack_APIC_irq();
+        is_spurious = !nmi_check_continuation();
         if (this_cpu(state_dump_pending)) {
             this_cpu(state_dump_pending) = false;
             dump_execstate(regs);
-            return;
+            is_spurious = false;
         }
+
+        if ( !is_spurious )
+            return;
     }
 
     /* see sw-dev-man vol 3, chapter 7.4.13.5 */
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..5005ac6e6e 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -79,6 +79,7 @@
 #include <public/hvm/params.h>
 #include <asm/cpuid.h>
 #include <xsm/xsm.h>
+#include <asm/mach-default/irq_vectors.h>
 #include <asm/pv/traps.h>
 #include <asm/pv/mm.h>
 
@@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+bool nmi_check_continuation(void)
+{
+    bool ret = false;
+
+    return ret;
+}
+
+void trigger_nmi_continuation(void)
+{
+    /*
+     * Issue a self-IPI. Handling is done in spurious_interrupt().
+     * NMI could have happened in IPI sequence, so wait for ICR being idle
+     * again before leaving NMI handler.
+     */
+    send_IPI_self(SPURIOUS_APIC_VECTOR);
+    apic_wait_icr_idle();
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..9a5da14162 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,14 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
-
+
+/**
+ * trigger_nmi_continuation
+ *
+ * Schedule continuation to be started in interrupt context after NMI handling.
+ */
+void trigger_nmi_continuation(void);
+
+/* Check for NMI continuation pending. */
+bool nmi_check_continuation(void);
 #endif /* ASM_NMI_H */