| Message ID | 1468430059-7958-1-git-send-email-james.hogan@imgtec.com (mailing list archive) |
|---|---|
| State | New, archived |
On 07/13/2016, 07:14 PM, James Hogan wrote:
> commit 797179bc4fe06c89e47a9f36f886f68640b423f8 upstream.
Applied to 3.12 too. Thanks!
On Wed, 2016-07-13 at 18:14 +0100, James Hogan wrote:
> commit 797179bc4fe06c89e47a9f36f886f68640b423f8 upstream.
>
> Copy __kvm_mips_vcpu_run() into unmapped memory, so that we can never
> get a TLB refill exception in it when KVM is built as a module.
>
> This was observed to happen with the host MIPS kernel running under
> QEMU, due to a not entirely transparent optimisation in the QEMU TLB
> handling where TLB entries replaced with TLBWR are copied to a separate
> part of the TLB array. Code in those pages continue to be executable,
> but those mappings persist only until the next ASID switch, even if
> they are marked global.
>
> An ASID switch happens in __kvm_mips_vcpu_run() at exception level
> after switching to the guest exception base. Subsequent TLB mapped
> kernel instructions just prior to switching to the guest trigger a TLB
> refill exception, which enters the guest exception handlers without
> updating EPC. This appears as a guest triggered TLB refill on a host
> kernel mapped (host KSeg2) address, which is not handled correctly as
> user (guest) mode accesses to kernel (host) segments always generate
> address error exceptions.
>
> Signed-off-by: James Hogan <james.hogan@imgtec.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Ralf Baechle <ralf@linux-mips.org>
> Cc: kvm@vger.kernel.org
> Cc: linux-mips@linux-mips.org
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> [james.hogan@imgtec.com: backported for stable 3.14]
> Signed-off-by: James Hogan <james.hogan@imgtec.com>
[...]

Belatedly queued this up for 3.16.

Ben.
```diff
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index a995fce87791..3ff5b4921b76 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -342,6 +342,7 @@ struct kvm_mips_tlb {
 #define KVM_MIPS_GUEST_TLB_SIZE	64
 struct kvm_vcpu_arch {
 	void *host_ebase, *guest_ebase;
+	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
 	unsigned long host_stack;
 	unsigned long host_gp;
 
diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index ba5ce99c021d..d1fa2a57218b 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -229,6 +229,7 @@ FEXPORT(__kvm_mips_load_k0k1)
 
 	/* Jump to guest */
 	eret
+EXPORT(__kvm_mips_vcpu_run_end)
 
 VECTOR(MIPSX(exception), unknown)
 /*
diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index 12d850b68763..2b2dd4ec03fb 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -348,6 +348,15 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	memcpy(gebase + offset, mips32_GuestException,
 	       mips32_GuestExceptionEnd - mips32_GuestException);
 
+#ifdef MODULE
+	offset += mips32_GuestExceptionEnd - mips32_GuestException;
+	memcpy(gebase + offset, (char *)__kvm_mips_vcpu_run,
+	       __kvm_mips_vcpu_run_end - (char *)__kvm_mips_vcpu_run);
+	vcpu->arch.vcpu_run = gebase + offset;
+#else
+	vcpu->arch.vcpu_run = __kvm_mips_vcpu_run;
+#endif
+
 	/* Invalidate the icache for these ranges */
 	mips32_SyncICache((unsigned long) gebase, ALIGN(size, PAGE_SIZE));
 
@@ -431,7 +440,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	kvm_guest_enter();
 
-	r = __kvm_mips_vcpu_run(run, vcpu);
+	r = vcpu->arch.vcpu_run(run, vcpu);
 
 	kvm_guest_exit();
 	local_irq_enable();
diff --git a/arch/mips/kvm/kvm_mips_int.h b/arch/mips/kvm/kvm_mips_int.h
index 20da7d29eede..bf41ea36210e 100644
--- a/arch/mips/kvm/kvm_mips_int.h
+++ b/arch/mips/kvm/kvm_mips_int.h
@@ -27,6 +27,8 @@
 #define MIPS_EXC_MAX                12
 /* XXXSL More to follow */
 
+extern char __kvm_mips_vcpu_run_end[];
+
 #define C_TI                        (_ULCAST_(1) << 30)
 
 #define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (0)
```
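For readers unfamiliar with the trick the hunks above implement (copy a block of position-independent code into the unmapped gebase area next to the guest exception handlers, then call it through the vcpu->arch.vcpu_run function pointer), here is a minimal user-space sketch of the same pattern. It is an illustration only, not kernel code: tiny_fn, run_copy and CODE_MAX are made-up names, the copy length is a fixed guess rather than a symbol-delimited size like __kvm_mips_vcpu_run_end - __kvm_mips_vcpu_run, and it assumes the compiler emits tiny_fn as a small, relocation-free leaf function.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define CODE_MAX 128	/* generous upper bound on the copied function's size */

/* Small leaf function with no relocations, so the copied bytes remain
 * valid at the new address; the run loop in kvm_locore.S is likewise
 * written to be position independent. */
static int tiny_fn(int x)
{
	return x + 1;
}

int main(void)
{
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int (*run_copy)(int);

	if (buf == MAP_FAILED)
		return 1;

	/* Relocate the code into the new buffer. */
	memcpy(buf, (void *)tiny_fn, CODE_MAX);
	/* Keep the instruction cache coherent with the data writes,
	 * as mips32_SyncICache() does for the gebase area. */
	__builtin___clear_cache(buf, (char *)buf + CODE_MAX);

	/* Call the relocated copy through a function pointer (the ISO C
	 * object-to-function-pointer cast is a GCC/Clang extension). */
	run_copy = (int (*)(int))buf;
	printf("%d\n", run_copy(41));	/* prints 42 if the copy executed */
	return 0;
}
```

The patch does the equivalent inside kvm_arch_vcpu_create(): when KVM is built as a module, the run loop is copied just after the guest exception handlers in the unmapped gebase area and kvm_arch_vcpu_ioctl_run() invokes it indirectly, so the run loop is never executed from a TLB-mapped module address where a refill exception could occur.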