| Message ID | 20190918131545.6405-1-will@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | kvm: Ensure writes to the coalesced MMIO ring are within bounds |
On Wed, Sep 18, 2019 at 02:15:45PM +0100, Will Deacon wrote:
> When records are written to the coalesced MMIO ring in response to a
> vCPU MMIO exit, the 'ring->last' field is used to index the ring buffer
> page. Although we hold the 'kvm->ring_lock' at this point, the ring
> structure is mapped directly into the host userspace and can therefore
> be modified to point at arbitrary pages within the kernel.
>
> Since this shouldn't happen in normal operation, simply bound the index
> by KVM_COALESCED_MMIO_MAX to contain the accesses within the ring buffer
> page.
>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: <stable@kernel.org> # 5.2.y
> Fixes: 5f94c1741bdc ("KVM: Add coalesced MMIO support (common part)")
> Reported-by: Bill Creasey <bcreasey@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>
> I think there are some other fixes kicking around for this, but they
> still rely on 'ring->last' being stable, which isn't necessarily the
> case. I'll send the -stable backport for kernels prior to 5.2 once this
> hits mainline.
>
>  virt/kvm/coalesced_mmio.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
> index 5294abb3f178..09b3e4421550 100644
> --- a/virt/kvm/coalesced_mmio.c
> +++ b/virt/kvm/coalesced_mmio.c
> @@ -67,6 +67,7 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
>  {
>  	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
>  	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
> +	u32 last;
>
>  	if (!coalesced_mmio_in_range(dev, addr, len))
>  		return -EOPNOTSUPP;
> @@ -79,13 +80,13 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
>  	}
>
>  	/* copy data in first free entry of the ring */
> -
> -	ring->coalesced_mmio[ring->last].phys_addr = addr;
> -	ring->coalesced_mmio[ring->last].len = len;
> -	memcpy(ring->coalesced_mmio[ring->last].data, val, len);
> -	ring->coalesced_mmio[ring->last].pio = dev->zone.pio;
> +	last = ring->last % KVM_COALESCED_MMIO_MAX;
> +	ring->coalesced_mmio[last].phys_addr = addr;
> +	ring->coalesced_mmio[last].len = len;
> +	memcpy(ring->coalesced_mmio[last].data, val, len);
> +	ring->coalesced_mmio[last].pio = dev->zone.pio;
>  	smp_wmb();
> -	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
> +	ring->last = (last + 1) % KVM_COALESCED_MMIO_MAX;
>  	spin_unlock(&dev->kvm->ring_lock);
>  	return 0;
>  }

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On 18/09/19 15:15, Will Deacon wrote:
> When records are written to the coalesced MMIO ring in response to a
> vCPU MMIO exit, the 'ring->last' field is used to index the ring buffer
> page. Although we hold the 'kvm->ring_lock' at this point, the ring
> structure is mapped directly into the host userspace and can therefore
> be modified to point at arbitrary pages within the kernel.
>
> Since this shouldn't happen in normal operation, simply bound the index
> by KVM_COALESCED_MMIO_MAX to contain the accesses within the ring buffer
> page.
>
> [...]
>
> I think there are some other fixes kicking around for this, but they
> still rely on 'ring->last' being stable, which isn't necessarily the
> case. I'll send the -stable backport for kernels prior to 5.2 once this
> hits mainline.

Google's patch, which checks if ring->last is not in range and fails
with -EOPNOTSUPP if not, is slightly better. I'll send it in a second
and Cc you (and also send it as a pull request to Linus).

Paolo

> [quoted diff snipped; see the full patch below]
On Wed, Sep 18, 2019 at 03:41:40PM +0200, Paolo Bonzini wrote:
> On 18/09/19 15:15, Will Deacon wrote:
> > When records are written to the coalesced MMIO ring in response to a
> > vCPU MMIO exit, the 'ring->last' field is used to index the ring buffer
> > page. Although we hold the 'kvm->ring_lock' at this point, the ring
> > structure is mapped directly into the host userspace and can therefore
> > be modified to point at arbitrary pages within the kernel.
> >
> > [...]
>
> Google's patch, which checks if ring->last is not in range and fails
> with -EOPNOTSUPP if not, is slightly better. I'll send it in a second
> and Cc you (and also send it as a pull request to Linus).

Okey doke, as long as it gets fixed! My minor concerns with the
error-checking variant are:

  * Whether or not you need a READ_ONCE to prevent the compiler
    potentially reloading 'ring->last' after validation

  * Whether or not this could be part of a spectre-v1 gadget

so, given that I don't think the malicious host deserves an error code
if it starts writing the 'last' index, I went with the "obviously safe"
version. But up to you.

Will
On 18/09/19 15:59, Will Deacon wrote:
> Okey doke, as long as it gets fixed! My minor concerns with the
> error-checking variant are:
>
>   * Whether or not you need a READ_ONCE to prevent the compiler
>     potentially reloading 'ring->last' after validation

Yes, it certainly needs READ_ONCE. I had already added it locally,
indeed.

>   * Whether or not this could be part of a spectre-v1 gadget

I think not: a spectre-v1 gadget requires a load that is indexed by
the contents of ring->coalesced_mmio[insert], and there is none here.

Paolo

> so, given that I don't think the malicious host deserves an error code
> if it starts writing the 'last' index, I went with the "obviously safe"
> version. But up to you.
>
> Will
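Neither Google's patch nor the version that was eventually merged
appears in this thread. Purely as a sketch of the error-checking
variant under discussion (not the actual merged commit; the helper
names and locking are carried over from Will's patch below), the
READ_ONCE() that Paolo mentions would slot in like this:

static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
				struct kvm_io_device *this, gpa_t addr,
				int len, const void *val)
{
	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
	u32 insert;

	if (!coalesced_mmio_in_range(dev, addr, len))
		return -EOPNOTSUPP;

	spin_lock(&dev->kvm->ring_lock);

	/*
	 * Snapshot the untrusted index exactly once, so the compiler
	 * cannot reload ring->last after the bounds check below; all
	 * later accesses go through the local copy.
	 */
	insert = READ_ONCE(ring->last);
	if (insert >= KVM_COALESCED_MMIO_MAX) {
		/* Userspace scribbled on 'last': refuse to coalesce. */
		spin_unlock(&dev->kvm->ring_lock);
		return -EOPNOTSUPP;
	}

	/*
	 * If a spectre-v1 gadget were a concern (Paolo argues above that
	 * none exists here, since nothing loads from the entry's
	 * contents), the usual hardening from <linux/nospec.h> would be:
	 *	insert = array_index_nospec(insert, KVM_COALESCED_MMIO_MAX);
	 */

	/* ring-full check (coalesced_mmio_has_room()) omitted for brevity */

	ring->coalesced_mmio[insert].phys_addr = addr;
	ring->coalesced_mmio[insert].len = len;
	memcpy(ring->coalesced_mmio[insert].data, val, len);
	ring->coalesced_mmio[insert].pio = dev->zone.pio;
	smp_wmb();
	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
	spin_unlock(&dev->kvm->ring_lock);
	return 0;
}

Compared with the masking approach in the patch below, this variant
reports corruption to the caller instead of silently wrapping the
index; both contain the write within the ring page.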
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 5294abb3f178..09b3e4421550 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -67,6 +67,7 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+	u32 last;
 
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
@@ -79,13 +80,13 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	}
 
 	/* copy data in first free entry of the ring */
-
-	ring->coalesced_mmio[ring->last].phys_addr = addr;
-	ring->coalesced_mmio[ring->last].len = len;
-	memcpy(ring->coalesced_mmio[ring->last].data, val, len);
-	ring->coalesced_mmio[ring->last].pio = dev->zone.pio;
+	last = ring->last % KVM_COALESCED_MMIO_MAX;
+	ring->coalesced_mmio[last].phys_addr = addr;
+	ring->coalesced_mmio[last].len = len;
+	memcpy(ring->coalesced_mmio[last].data, val, len);
+	ring->coalesced_mmio[last].pio = dev->zone.pio;
 	smp_wmb();
-	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
+	ring->last = (last + 1) % KVM_COALESCED_MMIO_MAX;
 	spin_unlock(&dev->kvm->ring_lock);
 	return 0;
 }
When records are written to the coalesced MMIO ring in response to a
vCPU MMIO exit, the 'ring->last' field is used to index the ring buffer
page. Although we hold the 'kvm->ring_lock' at this point, the ring
structure is mapped directly into the host userspace and can therefore
be modified to point at arbitrary pages within the kernel.

Since this shouldn't happen in normal operation, simply bound the index
by KVM_COALESCED_MMIO_MAX to contain the accesses within the ring buffer
page.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: <stable@kernel.org> # 5.2.y
Fixes: 5f94c1741bdc ("KVM: Add coalesced MMIO support (common part)")
Reported-by: Bill Creasey <bcreasey@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
I think there are some other fixes kicking around for this, but they
still rely on 'ring->last' being stable, which isn't necessarily the
case. I'll send the -stable backport for kernels prior to 5.2 once this
hits mainline.

 virt/kvm/coalesced_mmio.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
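For readers unfamiliar with why 'ring->last' is untrusted: the ring
page is mapped shared and writable into host userspace, which drains
entries between 'first' and 'last'. A minimal consumer sketch follows;
it is illustrative only ('vcpu_fd' and 'drain_coalesced_ring' are
assumed names, and the mmap-per-call structure is simplified from what
a real VMM such as QEMU does):

#include <linux/kvm.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Sketch of a userspace consumer of the coalesced MMIO ring, showing
 * why the kernel cannot trust ring->last: this same page is writable
 * from here. Assumes 'vcpu_fd' is an open KVM vcpu file descriptor.
 */
static void drain_coalesced_ring(int vcpu_fd)
{
	long page_size = sysconf(_SC_PAGESIZE);
	struct kvm_coalesced_mmio_ring *ring;
	__u32 max;

	/* The ring page sits at a well-known offset in the vcpu mmap. */
	ring = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    vcpu_fd, KVM_COALESCED_MMIO_PAGE_OFFSET * page_size);
	if (ring == MAP_FAILED)
		return;

	/* Mirrors the kernel's KVM_COALESCED_MMIO_MAX computation. */
	max = (page_size - sizeof(*ring)) / sizeof(struct kvm_coalesced_mmio);

	while (ring->first != ring->last) {
		struct kvm_coalesced_mmio *ent =
			&ring->coalesced_mmio[ring->first];

		printf("coalesced %s: addr=0x%llx len=%u\n",
		       ent->pio ? "PIO" : "MMIO",
		       (unsigned long long)ent->phys_addr, ent->len);

		/* Consume the entry before publishing the new 'first'. */
		__sync_synchronize();
		ring->first = (ring->first + 1) % max;
	}

	munmap(ring, page_size);
}

Because the page is shared writable, a compromised or buggy VMM can
store an out-of-range value to 'last' at any time, even while the
kernel holds 'kvm->ring_lock'; that is precisely the window this patch
closes.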