From patchwork Thu Jun 30 13:57:32 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901855
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
 Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng, Quentin Perret,
 Suzuki K Poulose, Michael Roth, Mark Rutland, Fuad Tabba, Oliver Upton,
 Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 09/24] KVM: arm64: Add helpers to pin memory shared with hyp
Date: Thu, 30 Jun 2022 14:57:32 +0100
Message-Id: <20220630135747.26983-10-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Add helpers allowing the hypervisor to check whether a range of pages
are currently shared by the host, and 'pin' them if so by blocking host
unshare operations until the memory has been unpinned.

This will allow the hypervisor to take references on host-provided data
structures (struct kvm and such) and be guaranteed these pages will
remain in a stable state until it decides to release them, e.g. during
guest teardown.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  3 ++
 arch/arm64/kvm/hyp/include/nvhe/memory.h      |  7 ++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 48 +++++++++++++++++++
 3 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index c87b19b2d468..998bf165af71 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -69,6 +69,9 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id);
 int kvm_host_prepare_stage2(void *pgt_pool_base);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
+int hyp_pin_shared_mem(void *from, void *to);
+void hyp_unpin_shared_mem(void *from, void *to);
+
 static __always_inline void __load_host_stage2(void)
 {
 	if (static_branch_likely(&kvm_protected_mode_initialized))
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 2681f632e1c1..29f2ebe306bc 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -52,10 +52,15 @@ static inline void hyp_page_ref_inc(struct hyp_page *p)
 	p->refcount++;
 }
 
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+static inline void hyp_page_ref_dec(struct hyp_page *p)
 {
 	BUG_ON(!p->refcount);
 	p->refcount--;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	hyp_page_ref_dec(p);
 	return (p->refcount == 0);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e7015bbefbea..e2e3b30b072e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -629,6 +629,9 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
 {
 	u64 size = tx->nr_pages * PAGE_SIZE;
 
+	if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr))
+		return -EBUSY;
+
 	if (__hyp_ack_skip_pgtable_check(tx))
 		return 0;
 
@@ -1042,3 +1045,48 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+int hyp_pin_shared_mem(void *from, void *to)
+{
+	u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE);
+	u64 end = PAGE_ALIGN((u64)to);
+	u64 size = end - start;
+	int ret;
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = __host_check_page_state_range(__hyp_pa(start), size,
+					    PKVM_PAGE_SHARED_OWNED);
+	if (ret)
+		goto unlock;
+
+	ret = __hyp_check_page_state_range(start, size,
+					   PKVM_PAGE_SHARED_BORROWED);
+	if (ret)
+		goto unlock;
+
+	for (cur = start; cur < end; cur += PAGE_SIZE)
+		hyp_page_ref_inc(hyp_virt_to_page(cur));
+
+unlock:
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}
+
+void hyp_unpin_shared_mem(void *from, void *to)
+{
+	u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE);
+	u64 end = PAGE_ALIGN((u64)to);
+
+	host_lock_component();
+	hyp_lock_component();
+
+	for (cur = start; cur < end; cur += PAGE_SIZE)
+		hyp_page_ref_dec(hyp_virt_to_page(cur));
+
+	hyp_unlock_component();
+	host_unlock_component();
+}
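
For reference, the expected calling pattern at EL2 looks roughly like the
sketch below. Only hyp_pin_shared_mem() and hyp_unpin_shared_mem() come
from this patch; pkvm_use_host_vm() and pkvm_release_host_vm() are
hypothetical wrapper names used purely for illustration:

	/*
	 * Usage sketch (hypothetical EL2 callers, not part of this patch):
	 * pin the host pages backing a shared "struct kvm" before
	 * dereferencing it, and unpin them once the hypervisor is done so
	 * that a pending host unshare can succeed again.
	 */
	static int pkvm_use_host_vm(struct kvm *host_kvm)
	{
		int ret;

		/*
		 * Pin every page covering [host_kvm, host_kvm + 1). This
		 * fails unless the whole range is PKVM_PAGE_SHARED_OWNED
		 * by the host and PKVM_PAGE_SHARED_BORROWED by hyp. While
		 * pinned, hyp_ack_unshare() returns -EBUSY for these pages.
		 */
		ret = hyp_pin_shared_mem(host_kvm, host_kvm + 1);
		if (ret)
			return ret;

		/* host_kvm is now guaranteed to stay shared and stable. */
		return 0;
	}

	static void pkvm_release_host_vm(struct kvm *host_kvm)
	{
		/* Drop the pin; the host can now unshare the pages. */
		hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
	}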