Message ID | 1483163161-2402-9-git-send-email-suravee.suthikulpanit@amd.com (mailing list archive)
---|---
State | New, archived
On 31/12/2016 05:45, Suravee Suthikulpanit wrote:
> Enabling AVIC implicitly disables the V_IRQ, V_INTR_PRIO, V_IGN_TPR,
> and V_INTR_VECTOR fields in the VMCB Control Word. Therefore, this patch
> introduces new interrupt injection code via AVIC backing page.
>
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  xen/arch/x86/hvm/svm/avic.c        | 28 ++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/svm/intr.c        |  4 ++++
>  xen/arch/x86/hvm/svm/svm.c         | 12 ++++++++++--
>  xen/include/asm-x86/hvm/svm/avic.h |  1 +
>  4 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/avic.c b/xen/arch/x86/hvm/svm/avic.c
> index 6351c8e..faa5e45 100644
> --- a/xen/arch/x86/hvm/svm/avic.c
> +++ b/xen/arch/x86/hvm/svm/avic.c
> @@ -636,6 +636,34 @@ void svm_avic_vmexit_do_noaccel(struct cpu_user_regs *regs)
>      return;
>  }
>
> +void svm_avic_deliver_posted_intr(struct vcpu *v, u8 vec)
> +{
> +    struct vlapic *vlapic = vcpu_vlapic(v);
> +
> +    /* Fallback to use non-AVIC if vcpu is not enabled with AVIC. */
> +    if ( !svm_avic_vcpu_enabled(v) )
> +    {
> +        if ( !vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
> +            vcpu_kick(v);
> +        return;
> +    }
> +
> +    if ( !(guest_cpu_user_regs()->eflags & X86_EFLAGS_IF) )
> +        return;

Won't this discard the interrupt?

> +
> +    if ( vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
> +        return;
> +
> +    /*
> +     * If vcpu is running on another cpu, hit the doorbell to signal
> +     * it to process interrupt. Otherwise, kick it.
> +     */
> +    if ( v->is_running && (v != current) )
> +        wrmsrl(AVIC_DOORBELL, cpu_data[v->processor].apicid);

Hmm - my gut feeling is that this is racy without holding the scheduler
lock for the target pcpu. Nothing (I am aware of) excludes ->is_running
and ->processor changing behind our back.

CC'ing George and Dario for their input.

~Andrew

> +    else
> +        vcpu_kick(v);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
>>> On 31.12.16 at 06:45, <suravee.suthikulpanit@amd.com> wrote:
> --- a/xen/arch/x86/hvm/svm/avic.c
> +++ b/xen/arch/x86/hvm/svm/avic.c
> @@ -636,6 +636,34 @@ void svm_avic_vmexit_do_noaccel(struct cpu_user_regs *regs)
>      return;
>  }
>
> +void svm_avic_deliver_posted_intr(struct vcpu *v, u8 vec)
> +{
> +    struct vlapic *vlapic = vcpu_vlapic(v);
> +
> +    /* Fallback to use non-AVIC if vcpu is not enabled with AVIC. */
> +    if ( !svm_avic_vcpu_enabled(v) )
> +    {
> +        if ( !vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
> +            vcpu_kick(v);
> +        return;
> +    }
> +
> +    if ( !(guest_cpu_user_regs()->eflags & X86_EFLAGS_IF) )
> +        return;

On top of Andrew's comment, this assumes v == current, which you neither
assert, nor allow the reviewers to verify (as the function has no caller).

Jan
On 02/01/17 17:45, Andrew Cooper wrote:
> On 31/12/2016 05:45, Suravee Suthikulpanit wrote:
>> Enabling AVIC implicitly disables the V_IRQ, V_INTR_PRIO, V_IGN_TPR,
>> and V_INTR_VECTOR fields in the VMCB Control Word. Therefore, this patch
>> introduces new interrupt injection code via AVIC backing page.
>>
>> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: Jan Beulich <JBeulich@suse.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> ---
>>  xen/arch/x86/hvm/svm/avic.c        | 28 ++++++++++++++++++++++++++++
>>  xen/arch/x86/hvm/svm/intr.c        |  4 ++++
>>  xen/arch/x86/hvm/svm/svm.c         | 12 ++++++++++--
>>  xen/include/asm-x86/hvm/svm/avic.h |  1 +
>>  4 files changed, 43 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/svm/avic.c b/xen/arch/x86/hvm/svm/avic.c
>> index 6351c8e..faa5e45 100644
>> --- a/xen/arch/x86/hvm/svm/avic.c
>> +++ b/xen/arch/x86/hvm/svm/avic.c
>> @@ -636,6 +636,34 @@ void svm_avic_vmexit_do_noaccel(struct cpu_user_regs *regs)
>>      return;
>>  }
>>
>> +void svm_avic_deliver_posted_intr(struct vcpu *v, u8 vec)
>> +{
>> +    struct vlapic *vlapic = vcpu_vlapic(v);
>> +
>> +    /* Fallback to use non-AVIC if vcpu is not enabled with AVIC. */
>> +    if ( !svm_avic_vcpu_enabled(v) )
>> +    {
>> +        if ( !vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
>> +            vcpu_kick(v);
>> +        return;
>> +    }
>> +
>> +    if ( !(guest_cpu_user_regs()->eflags & X86_EFLAGS_IF) )
>> +        return;
>
> Won't this discard the interrupt?
>
>> +
>> +    if ( vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
>> +        return;
>> +
>> +    /*
>> +     * If vcpu is running on another cpu, hit the doorbell to signal
>> +     * it to process interrupt. Otherwise, kick it.
>> +     */
>> +    if ( v->is_running && (v != current) )
>> +        wrmsrl(AVIC_DOORBELL, cpu_data[v->processor].apicid);
>
> Hmm - my gut feeling is that this is racy without holding the scheduler
> lock for the target pcpu. Nothing (I am aware of) excludes ->is_running
> and ->processor changing behind our back.
>
> CC'ing George and Dario for their input.

I'm not sure how AVIC_DOORBELL works (haven't looked at the whole
series) -- the vcpu_kick() path accesses both is_running and
v->processor without locks; but that's because any schedule event which
may change those values will also check to see whether there is a
pending event to be delivered.

In theory the same could apply to this mechanism, but it would take some
careful thinking (in particular, understanding the "NB's" in vcpu_kick()
to see if and how they apply).

 -George
diff --git a/xen/arch/x86/hvm/svm/avic.c b/xen/arch/x86/hvm/svm/avic.c
index 6351c8e..faa5e45 100644
--- a/xen/arch/x86/hvm/svm/avic.c
+++ b/xen/arch/x86/hvm/svm/avic.c
@@ -636,6 +636,34 @@ void svm_avic_vmexit_do_noaccel(struct cpu_user_regs *regs)
     return;
 }
 
+void svm_avic_deliver_posted_intr(struct vcpu *v, u8 vec)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+
+    /* Fallback to use non-AVIC if vcpu is not enabled with AVIC. */
+    if ( !svm_avic_vcpu_enabled(v) )
+    {
+        if ( !vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
+            vcpu_kick(v);
+        return;
+    }
+
+    if ( !(guest_cpu_user_regs()->eflags & X86_EFLAGS_IF) )
+        return;
+
+    if ( vlapic_test_and_set_vector(vec, &vlapic->regs->data[APIC_IRR]) )
+        return;
+
+    /*
+     * If vcpu is running on another cpu, hit the doorbell to signal
+     * it to process interrupt. Otherwise, kick it.
+     */
+    if ( v->is_running && (v != current) )
+        wrmsrl(AVIC_DOORBELL, cpu_data[v->processor].apicid);
+    else
+        vcpu_kick(v);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/svm/intr.c b/xen/arch/x86/hvm/svm/intr.c
index bd94731..876d2ad 100644
--- a/xen/arch/x86/hvm/svm/intr.c
+++ b/xen/arch/x86/hvm/svm/intr.c
@@ -30,6 +30,7 @@
 #include <asm/hvm/io.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vlapic.h>
+#include <asm/hvm/svm/avic.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/intr.h>
 #include <asm/hvm/nestedhvm.h> /* for nestedhvm_vcpu_in_guestmode */
@@ -101,6 +102,9 @@ static void svm_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
     HVMTRACE_3D(INTR_WINDOW, intack.vector, intack.source,
                 vmcb->eventinj.fields.v?vmcb->eventinj.fields.vector:-1);
 
+    if ( svm_avic_vcpu_enabled(v) )
+        return;
+
     /*
      * Create a dummy virtual interrupt to intercept as soon as the
      * guest can accept the real interrupt.
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index df59b8d..922f48f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1089,7 +1089,8 @@ static void noreturn svm_do_resume(struct vcpu *v)
         hvm_asid_flush_vcpu(v);
     }
 
-    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) )
+    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) &&
+         !svm_avic_vcpu_enabled(v) )
     {
         vintr_t intr;
 
@@ -2309,7 +2310,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
      * NB. We need to preserve the low bits of the TPR to make checked builds
      * of Windows work, even though they don't actually do anything.
      */
-    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) )
+    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) &&
+         !svm_avic_vcpu_enabled(v) )
     {
         intr = vmcb_get_vintr(vmcb);
         vlapic_set_reg(vlapic, APIC_TASKPRI,
@@ -2502,6 +2504,12 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
         intr = vmcb_get_vintr(vmcb);
 
+        if ( svm_avic_vcpu_enabled(v) )
+        {
+            gdprintk(XENLOG_ERR, "AVIC VINTR:\n");
+            domain_crash(v->domain);
+        }
+
         intr.fields.irq = 0;
         general1_intercepts &= ~GENERAL1_INTERCEPT_VINTR;
diff --git a/xen/include/asm-x86/hvm/svm/avic.h b/xen/include/asm-x86/hvm/svm/avic.h
index 7e0abcb..1676e01 100644
--- a/xen/include/asm-x86/hvm/svm/avic.h
+++ b/xen/include/asm-x86/hvm/svm/avic.h
@@ -40,4 +40,5 @@ int svm_avic_init_vmcb(struct vcpu *v);
 void svm_avic_vmexit_do_incomp_ipi(struct cpu_user_regs *regs);
 void svm_avic_vmexit_do_noaccel(struct cpu_user_regs *regs);
 
+void svm_avic_deliver_posted_intr(struct vcpu *v, u8 vector);
 #endif /* _SVM_AVIC_H_ */
Enabling AVIC implicitly disables the V_IRQ, V_INTR_PRIO, V_IGN_TPR,
and V_INTR_VECTOR fields in the VMCB Control Word. Therefore, this patch
introduces new interrupt injection code via AVIC backing page.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/avic.c        | 28 ++++++++++++++++++++++++++++
 xen/arch/x86/hvm/svm/intr.c        |  4 ++++
 xen/arch/x86/hvm/svm/svm.c         | 12 ++++++++++--
 xen/include/asm-x86/hvm/svm/avic.h |  1 +
 4 files changed, 43 insertions(+), 2 deletions(-)