Message ID | 1467975211-12674-12-git-send-email-james.hogan@imgtec.com (mailing list archive)
---|---
State | New, archived
diff --git a/arch/mips/kvm/tlb.c b/arch/mips/kvm/tlb.c
index f5f8c2acae53..254377d8e0b9 100644
--- a/arch/mips/kvm/tlb.c
+++ b/arch/mips/kvm/tlb.c
@@ -332,6 +332,8 @@ void kvm_mips_flush_host_tlb(int skip_kseg0)
 			/* Don't blow away guest kernel entries */
 			if (KVM_GUEST_KSEGX(entryhi) == KVM_GUEST_KSEG0)
 				continue;
+
+			write_c0_pagemask(old_pagemask);
 		}
 
 		/* Make sure all entries differ. */
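For context, the sketch below paraphrases the loop surrounding the hunk, with the fix applied. It is a simplified reconstruction for illustration, not the verbatim kernel code: hazard barriers and IRQ save/restore are omitted, and details may differ from arch/mips/kvm/tlb.c of this era. It shows why the restore sits inside the skip_kseg0 branch: that is the only path that performs the tlb_read() which clobbers CP0_PageMask.

/*
 * Simplified sketch of kvm_mips_flush_host_tlb() with the fix applied
 * (a paraphrase, not the exact upstream function).
 */
void kvm_mips_flush_host_tlb(int skip_kseg0)
{
	unsigned long old_entryhi = read_c0_entryhi();
	unsigned long old_pagemask = read_c0_pagemask();
	int entry;

	for (entry = 0; entry < current_cpu_data.tlbsize; entry++) {
		if (skip_kseg0) {
			/*
			 * Read the entry so its EntryHi can be inspected.
			 * This also loads the entry's page size into
			 * CP0_PageMask.
			 */
			write_c0_index(entry);
			tlb_read();

			/* Don't blow away guest kernel entries */
			if (KVM_GUEST_KSEGX(read_c0_entryhi()) ==
			    KVM_GUEST_KSEG0)
				continue;

			/*
			 * The fix: put the normal page size back before
			 * the tlb_write_indexed() below, so a huge-page
			 * mask can't be written back along with the
			 * unique invalidation address.
			 */
			write_c0_pagemask(old_pagemask);
		}

		/* Invalidate: unique KSeg0 address, invalid EntryLo pair. */
		write_c0_entryhi(UNIQUE_ENTRYHI(entry));
		write_c0_entrylo0(0);
		write_c0_entrylo1(0);
		tlb_write_indexed();
	}

	write_c0_entryhi(old_entryhi);
	write_c0_pagemask(old_pagemask);
}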
KVM sometimes flushes host TLB entries, reading each one to check if it corresponds to a guest KSeg0 address. In the absence of EntryHi.EHInv bits to invalidate the whole entry, the entries will be set to unique virtual addresses in KSeg0 (which is not TLB mapped), spaced 2*PAGE_SIZE apart.

The TLB read, however, clobbers the CP0_PageMask register with whatever page size that TLB entry had, and that same page size will be written back into the TLB entry along with the unique address. This would cause breakage when transparent huge pages are enabled on 64-bit host kernels, since huge page entries will overlap other nearby entries when separated by only 2*PAGE_SIZE, causing a machine check exception.

Fix this by restoring the old CP0_PageMask value (which should be set to the normal page size) after reading the TLB entry, if we're going to go ahead and invalidate it.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/kvm/tlb.c | 2 ++
 1 file changed, 2 insertions(+)
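To make the failure mode concrete, the standalone program below works through the overlap arithmetic. The constants are illustrative assumptions, not taken from the patch: 4 KiB base pages, a 2 MiB transparent huge page, and the UNIQUE_ENTRYHI() spacing of 2*PAGE_SIZE mirroring the kernel's pattern, compiled on a 64-bit system.

#include <stdio.h>

/* Illustrative assumptions: 4 KiB base pages and a 2 MiB THP huge
 * page, as on a typical 64-bit host; adjust for other configs. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define HPAGE_SIZE	(2UL << 20)

/* Unique invalidation addresses spaced 2*PAGE_SIZE apart in KSeg0,
 * mirroring the kernel's UNIQUE_ENTRYHI() pattern. */
#define CKSEG0			0xffffffff80000000UL
#define UNIQUE_ENTRYHI(idx)	(CKSEG0 + ((unsigned long)(idx) << (PAGE_SHIFT + 1)))

int main(void)
{
	/*
	 * A MIPS TLB entry maps an even/odd page pair, so an entry
	 * written back with a clobbered huge-page PageMask spans
	 * 2 * HPAGE_SIZE of virtual address space instead of the
	 * expected 2 * PAGE_SIZE.
	 */
	unsigned long huge_span = 2 * HPAGE_SIZE;
	unsigned long normal_span = 2 * PAGE_SIZE;
	unsigned long covered = huge_span / normal_span;

	printf("entry 0, normal mask: %#lx..%#lx\n",
	       UNIQUE_ENTRYHI(0), UNIQUE_ENTRYHI(0) + normal_span - 1);
	printf("entry 0, huge mask:   %#lx..%#lx\n",
	       UNIQUE_ENTRYHI(0), UNIQUE_ENTRYHI(0) + huge_span - 1);
	printf("-> overlaps the unique addresses of the next %lu entries,\n"
	       "   so later tlb_write_indexed()s create conflicting TLB\n"
	       "   entries and the CPU raises a machine check\n",
	       covered - 1);
	return 0;
}

With these assumed constants, one huge-page entry spans 4 MiB and therefore covers the unique addresses of the next 511 TLB indices, which is why restoring CP0_PageMask before the write-back is required.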