Message ID | 20220810193033.1090251-6-pcc@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: arm64: permit MAP_SHARED mappings with MTE enabled
On Wed, Aug 10, 2022 at 12:30:31PM -0700, Peter Collingbourne wrote:
> Previously we allowed creating a memslot containing a private mapping that
> was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
> we reject the memory region at memslot creation time.
>
> Since this is a minor tweak to the ABI (a VMM that created one of
> these memslots would fail later anyway), no VMM to my knowledge has
> MTE support yet, and the hardware with the necessary features is not
> generally available, we can probably make this ABI change at this point.

I don't think that's a noticeable ABI change. As you said, such VMs would
fail later anyway when trying to access such a page.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
On 10/08/2022 20:30, Peter Collingbourne wrote:
> Previously we allowed creating a memslot containing a private mapping that
> was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
> we reject the memory region at memslot creation time.
>
> Since this is a minor tweak to the ABI (a VMM that created one of
> these memslots would fail later anyway), no VMM to my knowledge has
> MTE support yet, and the hardware with the necessary features is not
> generally available, we can probably make this ABI change at this point.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>

Reviewed-by: Steven Price <steven.price@arm.com>

> ---
>  arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 750a69a97994..d54be80e31dd 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  	}
>  }
>  
> +static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * VM_SHARED mappings are not allowed with MTE to avoid races
> +	 * when updating the PG_mte_tagged page flag, see
> +	 * sanitise_mte_tags for more details.
> +	 */
> +	if (vma->vm_flags & VM_SHARED)
> +		return false;
> +
> +	return vma->vm_flags & VM_MTE_ALLOWED;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
> @@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>  
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
> -		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
> -		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
> -		    !(vma->vm_flags & VM_SHARED)) {
> +		/* Check the VMM hasn't introduced a new disallowed VMA */
> +		if (kvm_vma_mte_allowed(vma)) {
>  			sanitise_mte_tags(kvm, pfn, vma_pagesize);
>  		} else {
>  			ret = -EFAULT;
> @@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  		if (!vma)
>  			break;
>  
> -		/*
> -		 * VM_SHARED mappings are not allowed with MTE to avoid races
> -		 * when updating the PG_mte_tagged page flag, see
> -		 * sanitise_mte_tags for more details.
> -		 */
> -		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
> +		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
>  			ret = -EINVAL;
>  			break;
>  		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}
Previously we allowed creating a memslot containing a private mapping that
was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
we reject the memory region at memslot creation time.

Since this is a minor tweak to the ABI (a VMM that created one of
these memslots would fail later anyway), no VMM to my knowledge has
MTE support yet, and the hardware with the necessary features is not
generally available, we can probably make this ABI change at this point.

Signed-off-by: Peter Collingbourne <pcc@google.com>
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)
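
For illustration, a minimal VMM-side sketch of where the failure now
surfaces. This is not part of the patch: the file name and sizes are made
up, error handling is elided, and it assumes an arm64 host with
KVM_CAP_ARM_MTE available. A MAP_PRIVATE mapping of a regular disk-backed
file does not get VM_MTE_ALLOWED (unlike anonymous or tmpfs-backed
mappings), so registering it as guest memory for an MTE-enabled VM is the
case rejected here.

/* Hypothetical VMM snippet: with this patch, the bad memslot is rejected
 * with EINVAL at KVM_SET_USER_MEMORY_REGION time instead of KVM_RUN
 * failing later with EFAULT. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Make this an MTE-enabled VM. */
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };
	ioctl(vm, KVM_ENABLE_CAP, &cap);

	/* A private mapping of a regular disk-backed file ("guest-mem.img"
	 * is a made-up name) does not have VM_MTE_ALLOWED set. */
	int fd = open("guest-mem.img", O_RDWR);
	void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0,
		.memory_size = 0x10000,
		.userspace_addr = (unsigned long)mem,
	};

	/* Before this patch: succeeds here, and KVM_RUN later returns
	 * -EFAULT. After this patch: fails here with errno == EINVAL. */
	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0)
		perror("KVM_SET_USER_MEMORY_REGION");
	return 0;
}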