From patchwork Thu May 19 13:41:14 2022
X-Patchwork-Submitter: Will Deacon <will@kernel.org>
X-Patchwork-Id: 12855134
From: Will Deacon <will@kernel.org>
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
	Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
	Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
	Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 39/89] KVM: arm64: Extend memory donation to allow
 host-to-guest transitions
Date: Thu, 19 May 2022 14:41:14 +0100
Message-Id: <20220519134204.5379-40-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

In preparation for supporting protected guests, where guest memory
defaults to being inaccessible to the host, extend our memory protection
mechanisms to support donation of pages from the host to a specific
guest.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 62 +++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c                  |  2 +-
 3 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 364432276fe0..b01b5cdb38de 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -69,6 +69,7 @@ int __pkvm_host_reclaim_page(u64 pfn);
 int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
+int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
			     enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2e92be8bb463..d0544259eb01 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -890,6 +890,14 @@ static int guest_ack_share(u64 addr, const struct pkvm_mem_transition *tx,
 					      size, PKVM_NOPAGE);
 }
 
+static int guest_ack_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return __guest_check_page_state_range(tx->completer.guest.vcpu, addr,
+					      size, PKVM_NOPAGE);
+}
+
 static int guest_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 				enum kvm_pgtable_prot perms)
 {
@@ -903,6 +911,17 @@ static int guest_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 				     prot, &vcpu->arch.pkvm_memcache);
 }
 
+static int guest_complete_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	enum kvm_pgtable_prot prot = pkvm_mkstate(KVM_PGTABLE_PROT_RWX, PKVM_PAGE_OWNED);
+	struct kvm_vcpu *vcpu = tx->completer.guest.vcpu;
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	return kvm_pgtable_stage2_map(&vm->pgt, addr, size, tx->completer.guest.phys,
+				      prot, &vcpu->arch.pkvm_memcache);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1088,6 +1107,9 @@ static int check_donation(struct pkvm_mem_donation *donation)
 	case PKVM_ID_HYP:
 		ret = hyp_ack_donation(completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_ack_donation(completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1122,6 +1144,9 @@ static int __do_donate(struct pkvm_mem_donation *donation)
 	case PKVM_ID_HYP:
 		ret = hyp_complete_donation(completer_addr, tx);
 		break;
+	case PKVM_ID_GUEST:
+		ret = guest_complete_donation(completer_addr, tx);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1362,6 +1387,43 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+int __pkvm_host_donate_guest(u64 pfn, u64 gfn, struct kvm_vcpu *vcpu)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 guest_addr = hyp_pfn_to_phys(gfn);
+	struct kvm_shadow_vm *vm = get_shadow_vm(vcpu);
+	struct pkvm_mem_donation donation = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= host_addr,
+				.host	= {
+					.completer_addr	= guest_addr,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_GUEST,
+				.guest	= {
+					.vcpu = vcpu,
+					.phys = host_addr,
+				},
+			},
+		},
+	};
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = do_donate(&donation);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
+
 static int hyp_zero_page(phys_addr_t phys)
 {
 	void *addr;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a6676fd14cf9..2069e6833831 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -47,7 +47,7 @@
 					 KVM_PTE_LEAF_ATTR_HI_S2_XN)
 
 #define KVM_INVALID_PTE_OWNER_MASK	GENMASK(9, 2)
-#define KVM_MAX_OWNER_ID		1
+#define KVM_MAX_OWNER_ID		FIELD_MAX(KVM_INVALID_PTE_OWNER_MASK)
 
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable		*pgt;
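
For reviewers, a minimal sketch of how hyp code might drive the new
donation path. This is illustrative only and not part of the patch:
the example_* name is invented, and only __pkvm_host_donate_guest()
itself comes from this series.

/*
 * Hypothetical caller, for illustration only: donate the host page at
 * 'pfn' to the guest owning 'vcpu', mapping it at 'gfn'.
 */
static int example_donate_page_to_guest(struct kvm_vcpu *vcpu, u64 pfn,
					u64 gfn)
{
	int ret;

	/*
	 * __pkvm_host_donate_guest() takes the host and guest component
	 * locks, verifies that the host exclusively owns the page and
	 * that the target GFN is currently unbacked (PKVM_NOPAGE), then
	 * transfers ownership and maps the page RWX as PKVM_PAGE_OWNED
	 * in the guest stage-2.
	 */
	ret = __pkvm_host_donate_guest(pfn, gfn, vcpu);
	if (ret)
		return ret;

	/*
	 * The page is no longer mapped at host stage-2, so subsequent
	 * host accesses to it will fault rather than touch guest-owned
	 * memory.
	 */
	return 0;
}

Note that the locking order (host component first, then guest) matches
the existing __pkvm_host_share_guest() path, so the share and donate
transitions can be used side by side without lock-ordering issues.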