From patchwork Wed Aug 10 19:30:31 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940953
Date: Wed, 10 Aug 2022 12:30:31 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-6-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping
that was not VM_MTE_ALLOWED, but would later reject KVM_RUN with
-EFAULT. Now we reject the memory region at memslot creation time.

Since this is a minor tweak to the ABI (a VMM that created one of these
memslots would fail later anyway), no VMM to my knowledge has MTE
support yet, and the hardware with the necessary features is not
generally available, we can probably make this ABI change at this
point.

Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}