Message ID | 20240829191413.900740-1-seanjc@google.com
---|---
State | New, archived
On Thu, 29 Aug 2024 12:14:11 -0700, Sean Christopherson wrote:
> Fix a bug in kvm_clear_guest() where it would write beyond the target
> page _if_ handed a gpa+len that would span multiple pages. Luckily, the
> bug is unhittable in the current code base as all users ensure the
> gpa+len is bound to a single page.
>
> Patch 2 hardens the underlying single page APIs to guard against a bad
> offset+len, e.g. so that bugs like the one in kvm_clear_guest() are noisy
> and don't escalate to an out-of-bounds access.
>
> [...]

Applied to kvm-x86 generic, thanks!

[1/2] KVM: Write the per-page "segment" when clearing (part of) a guest page
      https://github.com/kvm-x86/linux/commit/ec495f2ab122
[2/2] KVM: Harden guest memory APIs against out-of-bounds accesses
      https://github.com/kvm-x86/linux/commit/025dde582bbf

--
https://github.com/kvm-x86/linux/tree/next
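For readers skimming the archive: the bug described above is the classic page-walk mistake of passing the *total* remaining length to a single-page helper instead of the portion that fits in the current page. Below is a minimal sketch of the corrected loop; it mirrors the shape of KVM's internal helpers (next_segment() and kvm_clear_guest_page()), but the function name clear_guest_sketch is hypothetical and the body is an illustration of the fix, not the verbatim commit.

```c
/* Return how many of 'len' bytes fit in the current page, starting at
 * 'offset'.  Mirrors the shape of KVM's internal next_segment() helper. */
static int next_segment(unsigned long len, int offset)
{
	if (len > PAGE_SIZE - offset)
		return PAGE_SIZE - offset;
	return len;
}

/* Illustrative sketch: clear 'len' bytes of guest memory starting at
 * 'gpa', one page-bounded segment at a time.  The original bug passed
 * 'len' (the total remaining length) to the single-page helper instead
 * of 'seg' (the portion that fits in the current page), so a request
 * spanning multiple pages would write past the end of the first page. */
static int clear_guest_sketch(struct kvm *kvm, gpa_t gpa, unsigned long len)
{
	gfn_t gfn = gpa >> PAGE_SHIFT;
	int offset = offset_in_page(gpa);
	int seg;
	int ret;

	while ((seg = next_segment(len, offset)) != 0) {
		/* Clear only the bytes that land in this page. */
		ret = kvm_clear_guest_page(kvm, gfn, offset, seg);
		if (ret < 0)
			return ret;
		offset = 0;	/* subsequent pages start at offset 0 */
		len -= seg;
		++gfn;
	}
	return 0;
}
```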
```diff
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f18c2d8c7476..ce64e490e9c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3872,14 +3872,17 @@ bool __vmx_guest_state_valid(struct kvm_vcpu *vcpu)
 
 static int init_rmode_tss(struct kvm *kvm, void __user *ua)
 {
-	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
+	// const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	u16 data;
-	int i;
+	// int i;
 
-	for (i = 0; i < 3; i++) {
-		if (__copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE))
-			return -EFAULT;
-	}
+	if (kvm_clear_guest(kvm, to_kvm_vmx(kvm)->tss_addr, PAGE_SIZE * 3))
+		return -EFAULT;
+
+	// for (i = 0; i < 3; i++) {
+	//	if (__copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE))
+	//		return -EFAULT;
+	// }
 
 	data = TSS_BASE_SIZE + TSS_REDIRECTION_SIZE;
 	if (__copy_to_user(ua + TSS_IOPB_BASE_OFFSET, &data, sizeof(u16)))
```
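The hunk above is exactly the kind of caller the cover letter warns about: it hands kvm_clear_guest() a range spanning three pages, which would have triggered the pre-fix out-of-bounds write. Patch 2's hardening makes such misuse of the single-page APIs noisy instead of silently corrupting the neighboring page. A minimal sketch of that guard follows, assuming the check takes the WARN-and-fail form the cover letter describes; clear_guest_page_sketch is a hypothetical name, and kvm_write_guest_page() and the zero-page expression are taken from the existing KVM code shown in the diff.

```c
/* Illustrative sketch of the hardening from patch 2: a single-page
 * guest-memory helper refuses (and warns once about) any access that
 * would run past the end of the page. */
static int clear_guest_page_sketch(struct kvm *kvm, gfn_t gfn, int offset,
				   int len)
{
	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));

	/* A single-page API must never be asked to cross a page boundary.
	 * Fail loudly instead of escalating to an out-of-bounds access. */
	if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
		return -EFAULT;

	return kvm_write_guest_page(kvm, gfn, zero_page, offset, len);
}
```

With a guard of this shape in place, a buggy multi-page caller like the old kvm_clear_guest() loop would splat on the first offending call rather than clobbering memory beyond the target page.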