From patchwork Wed Aug 10 19:30:28 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940950
Date: Wed, 10 Aug 2022 12:30:28 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-3-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before
attempting to sanitise the tags. Such detection should be done in the
caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
not have the vma, leave the page unmapped if not already tagged. Tag
initialisation will be done on a subsequent access fault in
user_mem_abort().
Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9012707f69c..1a3707aeb41f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*