From patchwork Thu Jun 30 13:57:42 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901882
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
 Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng, Quentin Perret,
 Suzuki K Poulose, Michael Roth, Mark Rutland, Fuad Tabba, Oliver Upton,
 Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 19/24] KVM: arm64: Return guest memory from EL2 via
 dedicated teardown memcache
Date: Thu, 30 Jun 2022 14:57:42 +0100
Message-Id: <20220630135747.26983-20-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Rather than relying on the host to free the shadow VM pages explicitly
on teardown, introduce a dedicated teardown memcache which allows the
host to reclaim guest memory resources without having to keep track of
all of the allocations made by EL2.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_host.h             |  6 +-----
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 17 +++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  8 +++++++-
 arch/arm64/kvm/pkvm.c                         | 12 +-----------
 5 files changed, 21 insertions(+), 24 deletions(-)
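For anyone unfamiliar with the memcache scheme used here: a
kvm_hyp_memcache is essentially a singly linked list threaded through
the free pages themselves, with the links stored as physical addresses
so that both the host and EL2 can walk it. Below is a minimal,
standalone sketch of that idea; memcache_model, mc_push, mc_pop and
the to_pa/to_va callbacks are illustrative stand-ins, not the in-tree
API:

  #include <stdint.h>
  #include <stddef.h>

  typedef uint64_t phys_addr_t;

  /* The cache is just a head pointer (a PA) and a page count. */
  struct memcache_model {
          phys_addr_t head;       /* PA of most recently pushed page */
          unsigned long nr_pages;
  };

  static void mc_push(struct memcache_model *mc, void *page,
                      phys_addr_t (*to_pa)(void *))
  {
          /* Store the old head in the page itself, then head the list. */
          *(phys_addr_t *)page = mc->head;
          mc->head = to_pa(page);
          mc->nr_pages++;
  }

  static void *mc_pop(struct memcache_model *mc,
                      void *(*to_va)(phys_addr_t))
  {
          void *page;

          if (!mc->nr_pages)
                  return NULL;

          /* Unlink the head page and advance to its successor. */
          page = to_va(mc->head);
          mc->head = *(phys_addr_t *)page;
          mc->nr_pages--;
          return page;
  }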
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 70a2db91665d..09481268c224 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -175,11 +175,7 @@ struct kvm_smccc_features {
 struct kvm_protected_vm {
 	unsigned int shadow_handle;
 	struct mutex shadow_lock;
-
-	struct {
-		void *pgd;
-		void *shadow;
-	} hyp_donations;
+	struct kvm_hyp_memcache teardown_mc;
 };
 
 struct kvm_arch {
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 36eea31a1c5f..663019992b67 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -76,7 +76,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
 int hyp_pin_shared_mem(void *from, void *to);
 void hyp_unpin_shared_mem(void *from, void *to);
-void reclaim_guest_pages(struct kvm_shadow_vm *vm);
+void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc);
 int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
 		    struct kvm_hyp_memcache *host_mc);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 5b22bba77e57..bcfdba1881c1 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -260,19 +260,24 @@ int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd)
 	return 0;
 }
 
-void reclaim_guest_pages(struct kvm_shadow_vm *vm)
+void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc)
 {
-	unsigned long nr_pages, pfn;
-
-	nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT;
-	pfn = hyp_virt_to_pfn(vm->pgt.pgd);
+	void *addr;
 
+	/* Dump all pgtable pages in the hyp_pool */
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_destroy(&vm->pgt);
 	vm->kvm.arch.mmu.pgd_phys = 0ULL;
 	guest_unlock_component(vm);
 
-	WARN_ON(__pkvm_hyp_donate_host(pfn, nr_pages));
+	/* Drain the hyp_pool into the memcache */
+	addr = hyp_alloc_pages(&vm->pool, 0);
+	while (addr) {
+		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
+		addr = hyp_alloc_pages(&vm->pool, 0);
+	}
 }
 
 int __pkvm_prot_finalize(void)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 114c5565de7d..a4a518b2a43b 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -546,8 +546,10 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva,
 
 int __pkvm_teardown_shadow(unsigned int shadow_handle)
 {
+	struct kvm_hyp_memcache *mc;
 	struct kvm_shadow_vm *vm;
 	size_t shadow_size;
+	void *addr;
 	int err;
 
 	/* Lookup then remove entry from the shadow table. */
@@ -569,7 +571,8 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle)
 	hyp_spin_unlock(&shadow_lock);
 
 	/* Reclaim guest pages (including page-table pages) */
-	reclaim_guest_pages(vm);
+	mc = &vm->host_kvm->arch.pkvm.teardown_mc;
+	reclaim_guest_pages(vm, mc);
 	unpin_host_vcpus(vm->shadow_vcpu_states, vm->kvm.created_vcpus);
 
 	/* Push the metadata pages to the teardown memcache */
@@ -577,6 +580,9 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle)
 	hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1);
 
 	memset(vm, 0, shadow_size);
+	for (addr = vm; addr < (void *)vm + shadow_size; addr += PAGE_SIZE)
+		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+
 	unmap_donated_memory_noclear(vm, shadow_size);
 
 	return 0;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b4466b31d7c8..b174d6dfde36 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -160,8 +160,6 @@ static int __kvm_shadow_create(struct kvm *kvm)
 
 	/* Store the shadow handle given by hyp for future call reference. */
 	kvm->arch.pkvm.shadow_handle = shadow_handle;
-	kvm->arch.pkvm.hyp_donations.pgd = pgd;
-	kvm->arch.pkvm.hyp_donations.shadow = shadow_addr;
 	return 0;
 
 free_shadow:
@@ -185,20 +183,12 @@ int kvm_shadow_create(struct kvm *kvm)
 
 void kvm_shadow_destroy(struct kvm *kvm)
 {
-	size_t pgd_sz, shadow_sz;
-
 	if (kvm->arch.pkvm.shadow_handle)
 		WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow,
 					  kvm->arch.pkvm.shadow_handle));
 
 	kvm->arch.pkvm.shadow_handle = 0;
-
-	shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE +
-			       KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus);
-	pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr);
-
-	free_pages_exact(kvm->arch.pkvm.hyp_donations.shadow, shadow_sz);
-	free_pages_exact(kvm->arch.pkvm.hyp_donations.pgd, pgd_sz);
+	free_hyp_memcache(&kvm->arch.pkvm.teardown_mc);
 }
 
 int kvm_init_pvm(struct kvm *kvm)
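
With this in place, the host-side reclaim in kvm_shadow_destroy()
boils down to draining the teardown memcache page by page. Roughly,
reusing the illustrative model from above (the real
free_hyp_memcache() differs in detail; drain_teardown_mc and free_fn
are hypothetical names):

  /* Pop every page EL2 donated back and hand it to the allocator. */
  static void drain_teardown_mc(struct memcache_model *mc,
                                void *(*to_va)(phys_addr_t),
                                void (*free_fn)(void *))
  {
          void *page;

          while ((page = mc_pop(mc, to_va)))
                  free_fn(page);
  }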