Message ID | 1538479892-14835-29-git-send-email-paulus@ozlabs.org (mailing list archive)
---|---
State | New, archived
Series | KVM: PPC: Book3S HV: Nested HV virtualization
On Tue, Oct 02, 2018 at 09:31:27PM +1000, Paul Mackerras wrote:

> From: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
>
> restore_hv_regs() is used to copy the hv_regs L1 wants to set to run the
> nested (L2) guest into the vcpu structure. We need to sanitise these
> values to ensure we don't let the L1 guest hypervisor do things we don't
> want it to.
>
> We don't let data address watchpoints or completed instruction address
> breakpoints be set to match in hypervisor state.
>
> We also don't let L1 enable features in the hypervisor facility status
> and control register (HFSCR) for L2 which we have disabled for L1. That
> is, L2 gets the intersection of the features which the L0 hypervisor has
> enabled for L1 and the features L1 wants to enable for L2. This could
> mean we give L1 a hypervisor facility unavailable interrupt for a
> facility it thinks it has enabled; however, it shouldn't have enabled a
> facility for the L2 guest that it doesn't have itself.
>
> We sanitise the registers when copying in the L2 hv_regs. We don't need
> to sanitise when copying back the L1 hv_regs, since these shouldn't be
> able to contain invalid values as they're just what was copied out.
>
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  arch/powerpc/include/asm/reg.h      |  1 +
>  arch/powerpc/kvm/book3s_hv_nested.c | 17 +++++++++++++++++
>  2 files changed, 18 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> index 9c42abf..47489f6 100644
> --- a/arch/powerpc/include/asm/reg.h
> +++ b/arch/powerpc/include/asm/reg.h
> @@ -415,6 +415,7 @@
>  #define HFSCR_DSCR	__MASK(FSCR_DSCR_LG)
>  #define HFSCR_VECVSX	__MASK(FSCR_VECVSX_LG)
>  #define HFSCR_FP	__MASK(FSCR_FP_LG)
> +#define HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56)	/* interrupt cause */
>  #define SPRN_TAR	0x32f	/* Target Address Register */
>  #define SPRN_LPCR	0x13E	/* LPAR Control Register */
>  #define LPCR_VPM0	ASM_CONST(0x8000000000000000)
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 7656cb3..7b1088a 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -86,6 +86,22 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>  	}
>  }
>
> +static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
> +{
> +	/*
> +	 * Don't let L1 enable features for L2 which we've disabled for L1,
> +	 * but preserve the interrupt cause field.
> +	 */
> +	hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
> +
> +	/* Don't let data address watchpoint match in hypervisor state */
> +	hr->dawrx0 &= ~DAWRX_HYP;
> +
> +	/* Don't let completed instruction address breakpt match in HV state */
> +	if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
> +		hr->ciabr &= ~CIABR_PRIV;
> +}
> +
>  static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
>  {
>  	struct kvmppc_vcore *vc = vcpu->arch.vcore;
> @@ -198,6 +214,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
>  	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
>  		LPCR_LPES | LPCR_MER;
>  	lpcr = (vc->lpcr & ~mask) | (l2_hv.lpcr & mask);
> +	sanitise_hv_regs(vcpu, &l2_hv);
>  	restore_hv_regs(vcpu, &l2_hv);
>
>  	vcpu->arch.ret = RESUME_GUEST;
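[Editorial note: to make the HFSCR intersection concrete, here is a minimal userspace sketch, not kernel code. The `HFSCR_FEATURE_FP`/`HFSCR_FEATURE_TM` bit positions are illustrative stand-ins for the kernel's HFSCR facility masks; only `HFSCR_INTR_CAUSE` (the top byte) matches the definition quoted in the patch.]

```c
/*
 * Sketch of the sanitise_hv_regs() HFSCR masking semantics.
 * Facility bit positions are illustrative, not the kernel's.
 */
#include <stdio.h>
#include <stdint.h>

#define HFSCR_INTR_CAUSE   (0xFFULL << 56)  /* top byte: interrupt cause */
#define HFSCR_FEATURE_FP   (1ULL << 0)      /* example facility bit */
#define HFSCR_FEATURE_TM   (1ULL << 5)      /* example facility bit */

int main(void)
{
	/* L0 enabled FP for L1 but left TM disabled. */
	uint64_t l1_hfscr = HFSCR_FEATURE_FP;

	/* L1 asks for both facilities for L2, plus an interrupt cause. */
	uint64_t l2_request = HFSCR_FEATURE_FP | HFSCR_FEATURE_TM |
			      (0x2AULL << 56);

	/* Sanitised value: intersection of features, cause preserved. */
	uint64_t l2_hfscr = l2_request & (HFSCR_INTR_CAUSE | l1_hfscr);

	printf("FP enabled for L2: %s\n",
	       (l2_hfscr & HFSCR_FEATURE_FP) ? "yes" : "no");  /* yes */
	printf("TM enabled for L2: %s\n",
	       (l2_hfscr & HFSCR_FEATURE_TM) ? "yes" : "no");  /* no  */
	printf("interrupt cause: 0x%llx\n",
	       (unsigned long long)(l2_hfscr >> 56));          /* 0x2a */
	return 0;
}
```

This is why L1 can see a hypervisor facility unavailable interrupt for a facility it believes it enabled: the TM request above is silently dropped because L0 never granted it to L1 in the first place.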
```diff
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 9c42abf..47489f6 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -415,6 +415,7 @@
 #define HFSCR_DSCR	__MASK(FSCR_DSCR_LG)
 #define HFSCR_VECVSX	__MASK(FSCR_VECVSX_LG)
 #define HFSCR_FP	__MASK(FSCR_FP_LG)
+#define HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56)	/* interrupt cause */
 #define SPRN_TAR	0x32f	/* Target Address Register */
 #define SPRN_LPCR	0x13E	/* LPAR Control Register */
 #define LPCR_VPM0	ASM_CONST(0x8000000000000000)
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 7656cb3..7b1088a 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -86,6 +86,22 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	}
 }
 
+static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+{
+	/*
+	 * Don't let L1 enable features for L2 which we've disabled for L1,
+	 * but preserve the interrupt cause field.
+	 */
+	hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
+
+	/* Don't let data address watchpoint match in hypervisor state */
+	hr->dawrx0 &= ~DAWRX_HYP;
+
+	/* Don't let completed instruction address breakpt match in HV state */
+	if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
+		hr->ciabr &= ~CIABR_PRIV;
+}
+
 static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -198,6 +214,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
 		LPCR_LPES | LPCR_MER;
 	lpcr = (vc->lpcr & ~mask) | (l2_hv.lpcr & mask);
+	sanitise_hv_regs(vcpu, &l2_hv);
 	restore_hv_regs(vcpu, &l2_hv);
 
 	vcpu->arch.ret = RESUME_GUEST;
```
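[Editorial note: the watchpoint/breakpoint rules can be sketched the same way. This is a standalone userspace illustration; the struct is a toy stand-in for `hv_guest_state`, and the `DAWRX_HYP`/`CIABR_PRIV`/`CIABR_PRIV_HYPER` values are illustrative approximations of the kernel's reg.h definitions, not authoritative.]

```c
/*
 * Sketch of the DAWRX0/CIABR sanitisation: strip any setting that
 * would make a debug register match while in hypervisor state.
 * Constant values are illustrative, not copied from the kernel.
 */
#include <assert.h>
#include <stdint.h>

#define DAWRX_HYP        (1ULL << 2)   /* watchpoint matches in HV state */
#define CIABR_PRIV       0x3ULL        /* privilege field of CIABR */
#define CIABR_PRIV_HYPER 0x3ULL        /* privilege value: HV state */

struct hv_regs_sketch {
	uint64_t dawrx0;
	uint64_t ciabr;
};

static void sanitise_sketch(struct hv_regs_sketch *hr)
{
	/* Clear only the HV-match enable; other watchpoint bits survive. */
	hr->dawrx0 &= ~DAWRX_HYP;

	/* A breakpoint aimed at HV state has its privilege field zeroed,
	 * which disables the match entirely. */
	if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
		hr->ciabr &= ~CIABR_PRIV;
}

int main(void)
{
	struct hv_regs_sketch hr = {
		.dawrx0 = DAWRX_HYP | 1ULL,          /* HV match + other bit */
		.ciabr  = 0x1000 | CIABR_PRIV_HYPER, /* toy address + HV priv */
	};

	sanitise_sketch(&hr);
	assert(!(hr.dawrx0 & DAWRX_HYP));      /* HV match bit gone */
	assert((hr.ciabr & CIABR_PRIV) == 0);  /* breakpoint disabled */
	return 0;
}
```

Note the asymmetry, visible in the patch itself: DAWRX0 keeps its other enables with only the hypervisor bit cleared, whereas a CIABR targeting hypervisor state is disabled outright rather than silently retargeted at a lower privilege level.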