From patchwork Wed Sep 21 03:51:35 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12983192
Date: Tue, 20 Sep 2022 20:51:35 -0700
In-Reply-To: <20220921035140.57513-1-pcc@google.com>
Message-Id: <20220921035140.57513-4-pcc@google.com>
Mime-Version: 1.0
References:
 <20220921035140.57513-1-pcc@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Subject: [PATCH v4 3/8] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
 Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before
attempting to sanitise the tags. Such detection should be done in the
caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
not have the vma, leave the page unmapped if not already tagged. Tag
initialisation will be done on a subsequent access fault in
user_mem_abort().
Signed-off-by: Catalin Marinas
[pcc@google.com: fix the page initializer]
Signed-off-by: Peter Collingbourne
Reviewed-by: Steven Price
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 012ed1bc0762..5a131f009cf9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*