From patchwork Fri Jul 22 01:50:31 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12925874
Date: Thu, 21 Jul 2022 18:50:31 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-6-pcc@google.com>
Mime-Version: 1.0
References: <20220722015034.809663-1-pcc@google.com>
Subject: [PATCH v2 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne , Cornelia Huck , Catalin Marinas , Will Deacon , Marc Zyngier , Evgenii Stepanov , kvm@vger.kernel.org, Steven Price , Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping
that was not VM_MTE_ALLOWED, but would later reject KVM_RUN with
-EFAULT. Now we reject the memory region at memslot creation time.

Since this is a minor tweak to the ABI (a VMM that created one of these
memslots would fail later anyway), no VMM to my knowledge has MTE
support yet, and the hardware with the necessary features is not
generally available, we can probably make this ABI change at this
point.

Signed-off-by: Peter Collingbourne 
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}