From patchwork Wed Sep 21 03:51:38 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12983195
Date: Tue, 20 Sep 2022 20:51:38 -0700
In-Reply-To: <20220921035140.57513-1-pcc@google.com>
Message-Id: <20220921035140.57513-7-pcc@google.com>
References: <20220921035140.57513-1-pcc@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Subject: [PATCH v4 6/8] KVM: arm64: unify the tests for VMAs in memslots when
 MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
 Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping
that was not VM_MTE_ALLOWED, but would later reject KVM_RUN with
-EFAULT. Now we reject the memory region at memslot creation time.

This is a minor tweak to the ABI (a VMM that created one of these
memslots would fail later anyway), no VMM to my knowledge has MTE
support yet, and the hardware with the necessary features is not
generally available, so we can probably make this ABI change at this
point.

Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bebfd1e0bbf0..e34fbabd8b93 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}
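
Not part of the patch, but to make the user-visible effect concrete, here is a
minimal, hypothetical VMM-side sketch (file name and error handling invented
for illustration). It assumes MTE-capable hardware with KVM_CAP_ARM_MTE
enabled for the VM, and takes a MAP_PRIVATE mapping of a regular file as an
example of "a private mapping that was not VM_MTE_ALLOWED": with this patch
such a memslot is refused at KVM_SET_USER_MEMORY_REGION time with EINVAL,
instead of being accepted and then failing at KVM_RUN time with EFAULT.

#include <errno.h>
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
	size_t size = 2 * 1024 * 1024;
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* MTE must be enabled for the VM before any vCPUs are created. */
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };
	if (ioctl(vm, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP(KVM_CAP_ARM_MTE)");

	/* "guest-image" is a placeholder for some regular (non-shmem) file;
	 * a private file-backed mapping does not get VM_MTE_ALLOWED. */
	int fd = open("guest-image", O_RDONLY);
	void *priv_file = mmap(NULL, size, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0x80000000,
		.memory_size = size,
		.userspace_addr = (unsigned long)priv_file,
	};

	/*
	 * Before this patch: this ioctl succeeded and a later KVM_RUN
	 * failed with EFAULT. After this patch: it fails here with
	 * EINVAL, because the backing VMA is not VM_MTE_ALLOWED.
	 */
	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0)
		printf("memslot rejected at creation: %s\n", strerror(errno));

	/* An anonymous private mapping carries VM_MTE_ALLOWED and is
	 * still accepted as guest memory. */
	void *anon = mmap(NULL, size, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	region.slot = 1;
	region.guest_phys_addr += size;
	region.userspace_addr = (unsigned long)anon;
	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0)
		perror("unexpected failure for anonymous mapping");

	return 0;
}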