From patchwork Wed Dec 15 16:12:31 2021
Date: Wed, 15 Dec 2021 16:12:31 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-15-qperret@google.com>
References: <20211215161232.1480836-1-qperret@google.com>
X-Mailer: git-send-email 2.34.1.173.g76aa8bc2d0-goog
Subject: [PATCH v4 14/14] KVM: arm64: pkvm: Unshare guest structs during
 teardown
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, kernel-team@android.com

Make use of the newly introduced unshare hypercall during guest teardown
to unmap guest-related data structures from the hyp stage-1.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/include/asm/kvm_mmu.h  |  1 +
 arch/arm64/kvm/arm.c              |  2 ++
 arch/arm64/kvm/fpsimd.c           | 34 ++++++++++++++++++++++---
 arch/arm64/kvm/mmu.c              | 42 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c            |  8 +++++-
 6 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cf858a7e3533..9360a2804df1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -321,6 +321,7 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch external_debug_state;
 
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
+	struct task_struct *parent_task;
 
 	struct {
 		/* {Break,watch}point registers */
@@ -737,6 +738,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
+void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 185d0f62b724..81839e9a8a24 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -151,6 +151,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 #include <asm/stage2_pgtable.h>
 
 int kvm_share_hyp(void *from, void *to);
+void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c202abb448b1..6057f3c5aafe 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -188,6 +188,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		}
 	}
 	atomic_set(&kvm->online_vcpus, 0);
+
+	kvm_unshare_hyp(kvm, kvm + 1);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 86899d3aa9a9..2f48fd362a8c 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -14,6 +14,19 @@
 #include <asm/kvm_mmu.h>
 #include <asm/sysreg.h>
 
+void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu)
+{
+	struct task_struct *p = vcpu->arch.parent_task;
+	struct user_fpsimd_state *fpsimd;
+
+	if (!is_protected_kvm_enabled() || !p)
+		return;
+
+	fpsimd = &p->thread.uw.fpsimd_state;
+	kvm_unshare_hyp(fpsimd, fpsimd + 1);
+	put_task_struct(p);
+}
+
 /*
  * Called on entry to KVM_RUN unless this vcpu previously ran at least
  * once and the most recent prior KVM_RUN for this vcpu was called from
@@ -29,12 +42,27 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 
 	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
+	kvm_vcpu_unshare_task_fp(vcpu);
+
 	/* Make sure the host task fpsimd state is visible to hyp: */
 	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
-	if (!ret)
-		vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
+	if (ret)
+		return ret;
+
+	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
+
+	/*
+	 * We need to keep current's task_struct pinned until its data has been
+	 * unshared with the hypervisor to make sure it is not re-used by the
+	 * kernel and donated to someone else while already shared -- see
+	 * kvm_vcpu_unshare_task_fp() for the matching put_task_struct().
+	 */
+	if (is_protected_kvm_enabled()) {
+		get_task_struct(current);
+		vcpu->arch.parent_task = current;
+	}
 
-	return ret;
+	return 0;
 }
 
 /*
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f26d83e3aa00..b53e5bc3f4c3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -344,6 +344,32 @@ static int share_pfn_hyp(u64 pfn)
 	return ret;
 }
 
+static int unshare_pfn_hyp(u64 pfn)
+{
+	struct rb_node **node, *parent;
+	struct hyp_shared_pfn *this;
+	int ret = 0;
+
+	mutex_lock(&hyp_shared_pfns_lock);
+	this = find_shared_pfn(pfn, &node, &parent);
+	if (WARN_ON(!this)) {
+		ret = -ENOENT;
+		goto unlock;
+	}
+
+	this->count--;
+	if (this->count)
+		goto unlock;
+
+	rb_erase(&this->node, &hyp_shared_pfns);
+	kfree(this);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1);
+unlock:
+	mutex_unlock(&hyp_shared_pfns_lock);
+
+	return ret;
+}
+
 int kvm_share_hyp(void *from, void *to)
 {
 	phys_addr_t start, end, cur;
@@ -376,6 +402,22 @@ int kvm_share_hyp(void *from, void *to)
 	return 0;
 }
 
+void kvm_unshare_hyp(void *from, void *to)
+{
+	phys_addr_t start, end, cur;
+	u64 pfn;
+
+	if (is_kernel_in_hyp_mode() || kvm_host_owns_hyp_mappings() || !from)
+		return;
+
+	start = ALIGN_DOWN(__pa(from), PAGE_SIZE);
+	end = PAGE_ALIGN(__pa(to));
+	for (cur = start; cur < end; cur += PAGE_SIZE) {
+		pfn = __phys_to_pfn(cur);
+		WARN_ON(unshare_pfn_hyp(pfn));
+	}
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e3e2a79fbd75..798a84eddbde 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -150,7 +150,13 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kfree(vcpu->arch.sve_state);
+	void *sve_state = vcpu->arch.sve_state;
+
+	kvm_vcpu_unshare_task_fp(vcpu);
+	kvm_unshare_hyp(vcpu, vcpu + 1);
+	if (sve_state)
+		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+	kfree(sve_state);
 }
 
 static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
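
A note for readers following the series: the teardown path above relies on
the reference-counted sharing scheme introduced earlier in the series, so
only the first share and the last unshare of a given pfn actually reach
the hypervisor. The standalone userspace sketch below (not part of the
patch) models what share_pfn_hyp()/unshare_pfn_hyp() do: a flat array
stands in for the kernel's rbtree, printf stands in for the
__pkvm_host_{share,unshare}_hyp hypercalls, locking is omitted, and all
names are illustrative.

#include <stdio.h>

#define MAX_SHARED 16

/* Illustrative stand-in for struct hyp_shared_pfn (no rbtree). */
struct shared_pfn {
	unsigned long pfn;
	int count;	/* 0 means the slot is free */
};

static struct shared_pfn table[MAX_SHARED];

static struct shared_pfn *find_shared(unsigned long pfn)
{
	for (int i = 0; i < MAX_SHARED; i++)
		if (table[i].count && table[i].pfn == pfn)
			return &table[i];
	return NULL;
}

/* Models share_pfn_hyp(): only the first share issues a "hypercall". */
static int share_pfn(unsigned long pfn)
{
	struct shared_pfn *this = find_shared(pfn);

	if (this) {
		this->count++;	/* already mapped in hyp: refcount only */
		return 0;
	}

	for (int i = 0; i < MAX_SHARED; i++) {
		if (!table[i].count) {
			table[i].pfn = pfn;
			table[i].count = 1;
			printf("hypercall: share pfn %lu\n", pfn);
			return 0;
		}
	}
	return -1;	/* table full; the kernel allocates dynamically */
}

/* Models unshare_pfn_hyp(): only the last put unmaps the page from hyp. */
static int unshare_pfn(unsigned long pfn)
{
	struct shared_pfn *this = find_shared(pfn);

	if (!this)
		return -1;	/* the kernel version WARNs and returns -ENOENT */

	if (--this->count == 0)
		printf("hypercall: unshare pfn %lu\n", pfn);
	return 0;
}

int main(void)
{
	share_pfn(42);		/* hypercall: share pfn 42 */
	share_pfn(42);		/* second user of the same page: refcount only */
	unshare_pfn(42);	/* refcount only */
	unshare_pfn(42);	/* hypercall: unshare pfn 42 */
	return 0;
}

This is why kvm_arm_vcpu_destroy() and kvm_arch_destroy_vm() can call
kvm_unshare_hyp() unconditionally for structures that may span pages also
shared by other objects: the page is only unmapped from the hyp stage-1
once its last user has gone away.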