From patchwork Thu Jun 30 13:57:24 2022
X-Patchwork-Id: 12901840
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 01/24] KVM: arm64: Move hyp refcount manipulation helpers
Date: Thu, 30 Jun 2022 14:57:24 +0100
Message-Id: <20220630135747.26983-2-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

We will soon need to manipulate struct hyp_page refcounts from outside
page_alloc.c, so move the helpers to a header file.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 18 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 19 -------------------
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 592b7edb3edb..418b66a82a50 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -45,4 +45,22 @@ static inline int hyp_page_count(void *addr)
 	return p->refcount;
 }
 
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+	BUG_ON(p->refcount == USHRT_MAX);
+	p->refcount++;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	BUG_ON(!p->refcount);
+	p->refcount--;
+	return (p->refcount == 0);
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+	BUG_ON(p->refcount);
+	p->refcount = 1;
+}
 #endif /* __KVM_HYP_MEMORY_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index d40f0b30b534..1ded09fc9b10 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -144,25 +144,6 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }
 
-static inline void hyp_page_ref_inc(struct hyp_page *p)
-{
-	BUG_ON(p->refcount == USHRT_MAX);
-	p->refcount++;
-}
-
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
-{
-	BUG_ON(!p->refcount);
-	p->refcount--;
-	return (p->refcount == 0);
-}
-
-static inline void hyp_set_page_refcounted(struct hyp_page *p)
-{
-	BUG_ON(p->refcount);
-	p->refcount = 1;
-}
-
 static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
 {
 	if (hyp_page_ref_dec_and_test(p))
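For illustration, a minimal sketch of how code outside page_alloc.c can
now use these helpers (hyp_phys_to_page() is the existing hyp_vmemmap
accessor; the two wrapper functions below are hypothetical names, not
part of the patch):

	/* Take a metadata reference on a page tracked in the hyp vmemmap. */
	static void get_meta_page(phys_addr_t phys)
	{
		struct hyp_page *p = hyp_phys_to_page(phys);

		hyp_page_ref_inc(p);	/* BUG()s on refcount overflow */
	}

	/* Drop a reference; returns true when the last one is gone. */
	static bool put_meta_page(phys_addr_t phys)
	{
		struct hyp_page *p = hyp_phys_to_page(phys);

		return hyp_page_ref_dec_and_test(p);
	}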
From patchwork Thu Jun 30 13:57:25 2022
X-Patchwork-Id: 12901843
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 02/24] KVM: arm64: Allow non-coalescable pages in a hyp_pool
Date: Thu, 30 Jun 2022 14:57:25 +0100
Message-Id: <20220630135747.26983-3-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

All the contiguous pages used to initialize a hyp_pool are considered
coalescable, which means that the hyp page allocator will actively try
to merge them with their buddies on the hyp_put_page() path. However,
using hyp_put_page() on a page that is not part of the initial memory
range given to a hyp_pool is currently unsupported.

In order to allow dynamically extending hyp pools at run-time, add a
check to __hyp_attach_page() to allow inserting 'external' pages into
the free-list of order 0. This will be necessary to allow lazy donation
of pages from the host to the hypervisor when allocating guest stage-2
page-table pages at EL2.
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 1ded09fc9b10..0d15227aced8 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,11 +93,15 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
+	phys_addr_t phys = hyp_page_to_phys(p);
 	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
 
+	if (phys < pool->range_start || phys >= pool->range_end)
+		goto insert;
+
 	/*
 	 * Only the first struct hyp_page of a high-order page (otherwise known
 	 * as the 'head') should have p->order set. The non-head pages should
@@ -116,6 +120,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 		p = min(p, buddy);
 	}
 
+insert:
 	/* Mark the new head, and insert it */
 	p->order = order;
 	page_add_to_list(p, &pool->free_area[order]);
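In other words (a hypothetical call site, not part of the patch): once a
page lying outside pool->range_{start,end} has been mapped at EL2, it
can simply be handed to the pool and will sit on the order-0 free list
instead of being merged with (non-existent) buddies:

	/* 'addr' is a hyp VA for a page outside the pool's initial range. */
	static void refill_pool_sketch(struct hyp_pool *pool, void *addr)
	{
		hyp_set_page_refcounted(hyp_virt_to_page(addr));
		hyp_put_page(pool, addr);	/* takes the new 'insert' path */
	}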
From patchwork Thu Jun 30 13:57:26 2022
X-Patchwork-Id: 12901842
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 03/24] KVM: arm64: Add flags to struct hyp_page
Date: Thu, 30 Jun 2022 14:57:26 +0100
Message-Id: <20220630135747.26983-4-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Add a 'flags' field to struct hyp_page, and reduce the size of the
'order' field to u8 to avoid growing the struct size.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  3 ++-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 0a048dc06a7d..9330b13075f8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[MAX_ORDER];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 418b66a82a50..2681f632e1c1 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -9,7 +9,8 @@
 
 struct hyp_page {
 	unsigned short refcount;
-	unsigned short order;
+	u8 order;
+	u8 flags;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 0d15227aced8..e6e4b550752b 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -94,8 +94,8 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
 	struct hyp_page *buddy;
+	u8 order = p->order;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
 
@@ -128,7 +128,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;
 
@@ -182,7 +182,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 
 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;
 
 	p->order = 0;
@@ -194,10 +194,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;
 
 	hyp_spin_lock(&pool->lock);
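The point of shrinking 'order' is to make room for 'flags' without
growing the structure; a compile-time check along these lines (my
addition, not in the patch) would capture the invariant:

	struct hyp_page {
		unsigned short refcount;	/* 2 bytes */
		u8 order;			/* 1 byte, HYP_NO_ORDER is now 0xff */
		u8 flags;			/* 1 byte, consumers added later */
	};

	/* Still one 32-bit vmemmap entry per page of memory. */
	static_assert(sizeof(struct hyp_page) == 4);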
From patchwork Thu Jun 30 13:57:27 2022
X-Patchwork-Id: 12901844
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 04/24] KVM: arm64: Back hyp_vmemmap for all of memory
Date: Thu, 30 Jun 2022 14:57:27 +0100
Message-Id: <20220630135747.26983-5-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

The EL2 vmemmap in nVHE Protected mode is currently very sparse: only
memory pages owned by the hypervisor itself have a matching struct
hyp_page. But since the size of these structs has been reduced
significantly, we can now afford to back the vmemmap for all of memory.

This will greatly simplify memory tracking, as the hypervisor will have
somewhere to store metadata (e.g. refcounts) that would not otherwise
fit in the 4 SW bits available in the host stage-2 page-table, for
instance.
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_pkvm.h    | 26 +++++++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 14 +------------
 arch/arm64/kvm/hyp/nvhe/mm.c         | 31 ++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/page_alloc.c |  4 +---
 arch/arm64/kvm/hyp/nvhe/setup.c      |  7 +++----
 arch/arm64/kvm/pkvm.c                | 18 ++--------------
 6 files changed, 60 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 9f4ad2a8df59..8f7b8a2314bb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -14,6 +14,32 @@
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);
 
+static inline unsigned long
+hyp_vmemmap_memblock_size(struct memblock_region *reg, size_t vmemmap_entry_size)
+{
+	unsigned long nr_pages = reg->size >> PAGE_SHIFT;
+	unsigned long start, end;
+
+	start = (reg->base >> PAGE_SHIFT) * vmemmap_entry_size;
+	end = start + nr_pages * vmemmap_entry_size;
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+	end = ALIGN(end, PAGE_SIZE);
+
+	return end - start;
+}
+
+static inline unsigned long hyp_vmemmap_pages(size_t vmemmap_entry_size)
+{
+	unsigned long res = 0, i;
+
+	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
+		res += hyp_vmemmap_memblock_size(&kvm_nvhe_sym(hyp_memory)[i],
+						 vmemmap_entry_size);
+	}
+
+	return res >> PAGE_SHIFT;
+}
+
 static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
 {
 	unsigned long total = 0, i;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 42d8eb9bfe72..b2ee6d5df55b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -15,7 +15,7 @@ extern hyp_spinlock_t pkvm_pgd_lock;
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
+int hyp_back_vmemmap(phys_addr_t back);
 int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
 int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
@@ -24,16 +24,4 @@ int __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 				  unsigned long *haddr);
 int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr);
 
-static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
-				     unsigned long *start, unsigned long *end)
-{
-	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct hyp_page *p = hyp_phys_to_page(phys);
-
-	*start = (unsigned long)p;
-	*end = *start + nr_pages * sizeof(struct hyp_page);
-	*start = ALIGN_DOWN(*start, PAGE_SIZE);
-	*end = ALIGN(*end, PAGE_SIZE);
-}
-
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 96193cb31a39..d3a3b47181de 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -129,13 +129,36 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	return ret;
 }
 
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back)
+int hyp_back_vmemmap(phys_addr_t back)
 {
-	unsigned long start, end;
+	unsigned long i, start, size, end = 0;
+	int ret;
 
-	hyp_vmemmap_range(phys, size, &start, &end);
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		start = hyp_memory[i].base;
+		start = ALIGN_DOWN((u64)hyp_phys_to_page(start), PAGE_SIZE);
+		/*
+		 * The beginning of the hyp_vmemmap region for the current
+		 * memblock may already be backed by the page backing the end
+		 * of the previous region, so avoid mapping it twice.
+		 */
+		start = max(start, end);
+
+		end = hyp_memory[i].base + hyp_memory[i].size;
+		end = PAGE_ALIGN((u64)hyp_phys_to_page(end));
+		if (start >= end)
+			continue;
+
+		size = end - start;
+		ret = __pkvm_create_mappings(start, size, back, PAGE_HYP);
+		if (ret)
+			return ret;
+
+		memset(hyp_phys_to_virt(back), 0, size);
+		back += size;
+	}
 
-	return __pkvm_create_mappings(start, end - start, back, PAGE_HYP);
+	return 0;
 }
 
 static void *__hyp_bp_vect_base;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e6e4b550752b..01976a58d850 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -235,10 +235,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	for (i = 0; i < nr_pages; i++) {
-		p[i].order = 0;
+	for (i = 0; i < nr_pages; i++)
 		hyp_set_page_refcounted(&p[i]);
-	}
 
 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e8d4ea2fcfa0..579eb4f73476 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -31,12 +31,11 @@ static struct hyp_pool hpool;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
-	unsigned long vstart, vend, nr_pages;
+	unsigned long nr_pages;
 
 	hyp_early_alloc_init(virt, size);
 
-	hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend);
-	nr_pages = (vend - vstart) >> PAGE_SHIFT;
+	nr_pages = hyp_vmemmap_pages(sizeof(struct hyp_page));
 	vmemmap_base = hyp_early_alloc_contig(nr_pages);
 	if (!vmemmap_base)
 		return -ENOMEM;
@@ -78,7 +77,7 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
-	ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base));
+	ret = hyp_back_vmemmap(hyp_virt_to_phys(vmemmap_base));
 	if (ret)
 		return ret;
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index ebecb7c045f4..34229425b25d 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -53,7 +53,7 @@ static int __init register_memblock_regions(void)
 
 void __init kvm_hyp_reserve(void)
 {
-	u64 nr_pages, prev, hyp_mem_pages = 0;
+	u64 hyp_mem_pages = 0;
 	int ret;
 
 	if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
@@ -71,21 +71,7 @@ void __init kvm_hyp_reserve(void)
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
 	hyp_mem_pages += host_s2_pgtable_pages();
-
-	/*
-	 * The hyp_vmemmap needs to be backed by pages, but these pages
-	 * themselves need to be present in the vmemmap, so compute the number
-	 * of pages needed by looking for a fixed point.
-	 */
-	nr_pages = 0;
-	do {
-		prev = nr_pages;
-		nr_pages = hyp_mem_pages + prev;
-		nr_pages = DIV_ROUND_UP(nr_pages * STRUCT_HYP_PAGE_SIZE,
-					PAGE_SIZE);
-		nr_pages += __hyp_pgtable_max_pages(nr_pages);
-	} while (nr_pages != prev);
-	hyp_mem_pages += nr_pages;
+	hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE);
 
 	/*
 	 * Try to allocate a PMD-aligned region to reduce TLB pressure once
From patchwork Thu Jun 30 13:57:28 2022
X-Patchwork-Id: 12901845
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 05/24] KVM: arm64: Make hyp stage-1 refcnt correct on the whole range
Date: Thu, 30 Jun 2022 14:57:28 +0100
Message-Id: <20220630135747.26983-6-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

We currently fix up the hypervisor stage-1 refcount only for specific
portions of the hyp stage-1 VA space. In order to allow unmapping pages
outside of these ranges, let's fix up the refcount for the entire hyp
VA space.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 62 +++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 579eb4f73476..8f2726d7e201 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -185,12 +185,11 @@ static void hpool_put_page(void *addr)
 	hyp_put_page(&hpool, addr);
 }
 
-static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
-					 kvm_pte_t *ptep,
-					 enum kvm_pgtable_walk_flags flag,
-					 void * const arg)
+static int fix_host_ownership_walker(u64 addr, u64 end, u32 level,
+				     kvm_pte_t *ptep,
+				     enum kvm_pgtable_walk_flags flag,
+				     void * const arg)
 {
-	struct kvm_pgtable_mm_ops *mm_ops = arg;
 	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	kvm_pte_t pte = *ptep;
@@ -199,15 +198,6 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
 	if (!kvm_pte_valid(pte))
 		return 0;
 
-	/*
-	 * Fix-up the refcount for the page-table pages as the early allocator
-	 * was unable to access the hyp_vmemmap and so the buddy allocator has
-	 * initialised the refcount to '1'.
-	 */
-	mm_ops->get_page(ptep);
-	if (flag != KVM_PGTABLE_WALK_LEAF)
-		return 0;
-
 	if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;
 
@@ -236,12 +226,30 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
 	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
 }
 
-static int finalize_host_mappings(void)
+static int fix_hyp_pgtable_refcnt_walker(u64 addr, u64 end, u32 level,
+					 kvm_pte_t *ptep,
+					 enum kvm_pgtable_walk_flags flag,
+					 void * const arg)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
+	kvm_pte_t pte = *ptep;
+
+	/*
+	 * Fix-up the refcount for the page-table pages as the early allocator
+	 * was unable to access the hyp_vmemmap and so the buddy allocator has
+	 * initialised the refcount to '1'.
+	 */
+	if (kvm_pte_valid(pte))
+		mm_ops->get_page(ptep);
+
+	return 0;
+}
+
+static int fix_host_ownership(void)
 {
 	struct kvm_pgtable_walker walker = {
-		.cb	= finalize_host_mappings_walker,
-		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
-		.arg	= pkvm_pgtable.mm_ops,
+		.cb	= fix_host_ownership_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
 	};
 	int i, ret;
 
@@ -257,6 +265,18 @@ static int finalize_host_mappings(void)
 	return 0;
 }
 
+static int fix_hyp_pgtable_refcnt(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fix_hyp_pgtable_refcnt_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pkvm_pgtable.mm_ops,
+	};
+
+	return kvm_pgtable_walk(&pkvm_pgtable, 0, BIT(pkvm_pgtable.ia_bits),
+				&walker);
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
 	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -286,7 +306,11 @@ void __noreturn __pkvm_init_finalise(void)
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
 
-	ret = finalize_host_mappings();
+	ret = fix_host_ownership();
+	if (ret)
+		goto out;
+
+	ret = fix_hyp_pgtable_refcnt();
 	if (ret)
 		goto out;
 
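With the refcounts correct across the whole hyp VA space, a later
caller can tear down any hyp stage-1 mapping and have intermediate
page-table pages freed once their refcount drops to zero. A minimal
sketch of such a call site (hypothetical, following the convention used
elsewhere in the series of kvm_pgtable_hyp_unmap() returning the number
of bytes unmapped):

	static int hyp_unmap_range_sketch(u64 va, u64 size)
	{
		u64 unmapped = kvm_pgtable_hyp_unmap(&pkvm_pgtable, va, size);

		return (unmapped != size) ? -EFAULT : 0;
	}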
From patchwork Thu Jun 30 13:57:29 2022
X-Patchwork-Id: 12901846
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 06/24] KVM: arm64: Unify identifiers used to distinguish host and hypervisor
Date: Thu, 30 Jun 2022 14:57:29 +0100
Message-Id: <20220630135747.26983-7-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

The 'pkvm_component_id' enum type provides constants to refer to the
host and the hypervisor, yet this information is duplicated by the
'pkvm_hyp_id' constant.

Remove the definition of 'pkvm_hyp_id' and move the 'pkvm_component_id'
type definition to 'mem_protect.h' so that it can be used outside of
the memory protection code.
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 6 +++++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 8 --------
 arch/arm64/kvm/hyp/nvhe/setup.c               | 2 +-
 3 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 80e99836eac7..f5705a1e972f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -51,7 +51,11 @@ struct host_kvm {
 };
 extern struct host_kvm host_kvm;
 
-extern const u8 pkvm_hyp_id;
+/* This corresponds to page-table locking order */
+enum pkvm_component_id {
+	PKVM_ID_HOST,
+	PKVM_ID_HYP,
+};
 
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 78edf077fa3b..10390b8dc841 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -26,8 +26,6 @@ struct host_kvm host_kvm;
 
 static struct hyp_pool host_s2_pool;
 
-const u8 pkvm_hyp_id = 1;
-
 static void host_lock_component(void)
 {
 	hyp_spin_lock(&host_kvm.lock);
@@ -384,12 +382,6 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	BUG_ON(ret && ret != -EAGAIN);
 }
 
-/* This corresponds to locking order */
-enum pkvm_component_id {
-	PKVM_ID_HOST,
-	PKVM_ID_HYP,
-};
-
 struct pkvm_mem_transition {
 	u64 nr_pages;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 8f2726d7e201..0312c9c74a5a 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -212,7 +212,7 @@ static int fix_host_ownership_walker(u64 addr, u64 end, u32 level,
 	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
 	switch (state) {
 	case PKVM_PAGE_OWNED:
-		return host_stage2_set_owner_locked(phys, PAGE_SIZE, pkvm_hyp_id);
+		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
 		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
 		break;
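Note in passing (my reading of the patch, not spelled out in it): with
PKVM_ID_HOST as the first enumerator, the constants evaluate to 0 and 1,
so passing PKVM_ID_HYP to host_stage2_set_owner_locked() stores the same
owner ID in the host stage-2 page-table that the removed
'const u8 pkvm_hyp_id = 1' used to provide:

	/* This corresponds to page-table locking order */
	enum pkvm_component_id {
		PKVM_ID_HOST,	/* == 0 */
		PKVM_ID_HYP,	/* == 1, same value as the old pkvm_hyp_id */
	};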
From patchwork Thu Jun 30 13:57:30 2022
X-Patchwork-Id: 12901853
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 07/24] KVM: arm64: Implement do_donate() helper for donating memory
Date: Thu, 30 Jun 2022 14:57:30 +0100
Message-Id: <20220630135747.26983-8-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Transferring ownership of a memory region from one component to another
can be achieved using a "donate" operation, which results in the
previous owner losing access to the underlying pages entirely.

Implement a do_donate() helper, along the same lines as do_{un,}share,
and provide this functionality for the host-{to,from}-hyp cases, as it
will later be used to donate/reclaim memory pages to store VM metadata
at EL2.
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   2 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 239 ++++++++++++++++++
 2 files changed, 241 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index f5705a1e972f..c87b19b2d468 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -60,6 +60,8 @@ enum pkvm_component_id {
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
 int __pkvm_host_unshare_hyp(u64 pfn);
+int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
+int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 10390b8dc841..f475d554c9fd 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -395,6 +395,9 @@ struct pkvm_mem_transition {
 			/* Address in the completer's address space */
 			u64	completer_addr;
 		} host;
+		struct {
+			u64	completer_addr;
+		} hyp;
 	};
 	} initiator;
@@ -408,6 +411,10 @@ struct pkvm_mem_share {
 	const enum kvm_pgtable_prot		completer_prot;
 };
 
+struct pkvm_mem_donation {
+	const struct pkvm_mem_transition	tx;
+};
+
 struct check_walk_data {
 	enum pkvm_page_state	desired;
 	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
@@ -507,6 +514,46 @@ static int host_initiate_unshare(u64 *completer_addr,
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
 
+static int host_initiate_donation(u64 *completer_addr,
+				  const struct pkvm_mem_transition *tx)
+{
+	u8 owner_id = tx->completer.id;
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	*completer_addr = tx->initiator.host.completer_addr;
+	return host_stage2_set_owner_locked(tx->initiator.addr, size, owner_id);
+}
+
+static bool __host_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx)
+{
+	return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) ||
+		 tx->initiator.id != PKVM_ID_HYP);
+}
+
+static int __host_ack_transition(u64 addr, const struct pkvm_mem_transition *tx,
+				 enum pkvm_page_state state)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	if (__host_ack_skip_pgtable_check(tx))
+		return 0;
+
+	return __host_check_page_state_range(addr, size, state);
+}
+
+static int host_ack_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	return __host_ack_transition(addr, tx, PKVM_NOPAGE);
+}
+
+static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u8 host_id = tx->completer.id;
+
+	return host_stage2_set_owner_locked(addr, size, host_id);
+}
+
 static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
 {
 	if (!kvm_pte_valid(pte))
@@ -527,6 +574,27 @@ static int __hyp_check_page_state_range(u64 addr, u64 size,
 	return check_page_state_range(&pkvm_pgtable, addr, size, &d);
 }
 
+static int hyp_request_donation(u64 *completer_addr,
+				const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+
+	*completer_addr = tx->initiator.hyp.completer_addr;
+	return __hyp_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
+}
+
+static int hyp_initiate_donation(u64 *completer_addr,
+				 const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	int ret;
+
+	*completer_addr = tx->initiator.hyp.completer_addr;
+	ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, tx->initiator.addr, size);
+	return (ret != size) ? -EFAULT : 0;
+}
+
 static bool __hyp_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx)
 {
 	return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) ||
@@ -558,6 +626,16 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
 					    PKVM_PAGE_SHARED_BORROWED);
 }
 
+static int hyp_ack_donation(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	if (__hyp_ack_skip_pgtable_check(tx))
+		return 0;
+
+	return __hyp_check_page_state_range(addr, size, PKVM_NOPAGE);
+}
+
 static int hyp_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 			      enum kvm_pgtable_prot perms)
 {
@@ -576,6 +654,15 @@ static int hyp_complete_unshare(u64 addr, const struct pkvm_mem_transition *tx)
 	return (ret != size) ? -EFAULT : 0;
 }
 
+static int hyp_complete_donation(u64 addr,
+				 const struct pkvm_mem_transition *tx)
+{
+	void *start = (void *)addr, *end = start + (tx->nr_pages * PAGE_SIZE);
+	enum kvm_pgtable_prot prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_OWNED);
+
+	return pkvm_create_mappings_locked(start, end, prot);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -728,6 +815,94 @@ static int do_unshare(struct pkvm_mem_share *share)
 	return WARN_ON(__do_unshare(share));
 }
 
+static int check_donation(struct pkvm_mem_donation *donation)
+{
+	const struct pkvm_mem_transition *tx = &donation->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_request_owned_transition(&completer_addr, tx);
+		break;
+	case PKVM_ID_HYP:
+		ret = hyp_request_donation(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_ack_donation(completer_addr, tx);
+		break;
+	case PKVM_ID_HYP:
+		ret = hyp_ack_donation(completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int __do_donate(struct pkvm_mem_donation *donation)
+{
+	const struct pkvm_mem_transition *tx = &donation->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_initiate_donation(&completer_addr, tx);
+		break;
+	case PKVM_ID_HYP:
+		ret = hyp_initiate_donation(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HOST:
+		ret = host_complete_donation(completer_addr, tx);
+		break;
+	case PKVM_ID_HYP:
+		ret = hyp_complete_donation(completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+/*
+ * do_donate():
+ *
+ * The page owner transfers ownership to another component, losing access
+ * as a consequence.
+ *
+ * Initiator: OWNED	=> NOPAGE
+ * Completer: NOPAGE	=> OWNED
+ */
+static int do_donate(struct pkvm_mem_donation *donation)
+{
+	int ret;
+
+	ret = check_donation(donation);
+	if (ret)
+		return ret;
+
+	return WARN_ON(__do_donate(donation));
+}
+
 int __pkvm_host_share_hyp(u64 pfn)
 {
 	int ret;
@@ -793,3 +968,67 @@ int __pkvm_host_unshare_hyp(u64 pfn)
 
 	return ret;
 }
+
+int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_donation donation = {
+		.tx	= {
+			.nr_pages	= nr_pages,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= host_addr,
+				.host	= {
+					.completer_addr = hyp_addr,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HYP,
+			},
+		},
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_donate(&donation);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}
+
+int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_donation donation = {
+		.tx	= {
+			.nr_pages	= nr_pages,
+			.initiator	= {
+				.id	= PKVM_ID_HYP,
+				.addr	= hyp_addr,
+				.hyp	= {
+					.completer_addr = host_addr,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HOST,
+			},
+		},
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_donate(&donation);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}
From patchwork Thu Jun 30 13:57:31 2022
From: Will Deacon
Subject: [PATCH v2 08/24] KVM: arm64: Prevent the donation of no-map pages
Date: Thu, 30 Jun 2022 14:57:31 +0100
Message-Id: <20220630135747.26983-9-will@kernel.org>

From: Quentin Perret

Memory regions marked as no-map in DT routinely include TrustZone carve-outs and such. Although donating such pages to the hypervisor may not breach confidentiality, they could be used to corrupt its state in uncontrollable ways. To prevent this, let's block host-initiated memory transitions targeting no-map pages altogether in nVHE protected mode, as there is currently no valid reason to allow this. Thankfully, the pKVM EL2 hypervisor keeps a full copy of the host's list of memblock regions, which makes it easy to check any given region for the MEMBLOCK_NOMAP flag at EL2.
Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index f475d554c9fd..e7015bbefbea 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -193,7 +193,7 @@ struct kvm_mem_range { u64 end; }; -static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) +static struct memblock_region *find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) { int cur, left = 0, right = hyp_memblock_nr; struct memblock_region *reg; @@ -216,18 +216,28 @@ static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range) } else { range->start = reg->base; range->end = end; - return true; + return reg; } } - return false; + return NULL; } bool addr_is_memory(phys_addr_t phys) { struct kvm_mem_range range; - return find_mem_range(phys, &range); + return !!find_mem_range(phys, &range); +} + +static bool addr_is_allowed_memory(phys_addr_t phys) +{ + struct memblock_region *reg; + struct kvm_mem_range range; + + reg = find_mem_range(phys, &range); + + return reg && !(reg->flags & MEMBLOCK_NOMAP); } static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range) @@ -350,7 +360,7 @@ static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot pr static int host_stage2_idmap(u64 addr) { struct kvm_mem_range range; - bool is_memory = find_mem_range(addr, &range); + bool is_memory = !!find_mem_range(addr, &range); enum kvm_pgtable_prot prot; int ret; @@ -428,7 +438,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level, struct check_walk_data *d = arg; kvm_pte_t pte = *ptep; - if (kvm_pte_valid(pte) && !addr_is_memory(kvm_pte_to_phys(pte))) + if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte))) return -EINVAL; return d->get_page_state(pte) == d->desired ? 
0 : -EPERM;
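Note: the key change above is that page-state checks now consult the region's flags rather than its mere presence in the memblock list. A minimal, self-contained sketch of the lookup-plus-flag check follows (invented region data and flag value; not the kernel's memblock implementation):

#include <stdbool.h>
#include <stdio.h>

#define MEMBLOCK_NOMAP	0x4	/* flag value is illustrative only */

struct region { unsigned long base, size, flags; };

/* Sorted, non-overlapping regions, as maintained by the host. */
static const struct region regions[] = {
	{ 0x80000000, 0x40000000, 0 },			/* normal RAM */
	{ 0xc0000000, 0x00200000, MEMBLOCK_NOMAP },	/* TZ carve-out */
};

/* Binary search, as in find_mem_range(). */
static const struct region *find_mem_range(unsigned long addr)
{
	int left = 0, right = sizeof(regions) / sizeof(regions[0]);

	while (left < right) {
		int cur = (left + right) / 2;
		const struct region *reg = &regions[cur];

		if (addr < reg->base)
			right = cur;
		else if (addr >= reg->base + reg->size)
			left = cur + 1;
		else
			return reg;
	}
	return NULL;
}

static bool addr_is_allowed_memory(unsigned long addr)
{
	const struct region *reg = find_mem_range(addr);

	/* Memory, and not carrying the no-map flag. */
	return reg && !(reg->flags & MEMBLOCK_NOMAP);
}

int main(void)
{
	printf("%d %d\n", addr_is_allowed_memory(0x80001000),	/* 1 */
			  addr_is_allowed_memory(0xc0001000));	/* 0 */
	return 0;
}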
From patchwork Thu Jun 30 13:57:32 2022
From: Will Deacon
Subject: [PATCH v2 09/24] KVM: arm64: Add helpers to pin memory shared with hyp
Date: Thu, 30 Jun 2022 14:57:32 +0100
Message-Id: <20220630135747.26983-10-will@kernel.org>

From: Quentin Perret

Add helpers allowing the hypervisor to check whether a range of pages is currently shared by the host, and to 'pin' those pages if so by blocking host unshare operations until the memory has been unpinned. This will allow the hypervisor to take references on host-provided data structures (struct kvm and such) and be guaranteed that these pages will remain in a stable state until it decides to release them, e.g. during guest teardown.

Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 3 ++ arch/arm64/kvm/hyp/include/nvhe/memory.h | 7 ++- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 48 +++++++++++++++++++ 3 files changed, 57 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index c87b19b2d468..998bf165af71 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -69,6 +69,9 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id); int kvm_host_prepare_stage2(void *pgt_pool_base); void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); +int hyp_pin_shared_mem(void *from, void *to); +void hyp_unpin_shared_mem(void *from, void *to); + static __always_inline void __load_host_stage2(void) { if (static_branch_likely(&kvm_protected_mode_initialized)) diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index 2681f632e1c1..29f2ebe306bc 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -52,10 +52,15 @@ static inline void hyp_page_ref_inc(struct hyp_page *p) p->refcount++; } -static inline int hyp_page_ref_dec_and_test(struct hyp_page *p) +static inline void hyp_page_ref_dec(struct hyp_page *p) { BUG_ON(!p->refcount); p->refcount--; +} + +static inline int hyp_page_ref_dec_and_test(struct hyp_page *p) +{ + hyp_page_ref_dec(p); return (p->refcount == 0); } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index e7015bbefbea..e2e3b30b072e 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -629,6 +629,9 @@ static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx) { u64 size = tx->nr_pages * PAGE_SIZE; + if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr)) + return -EBUSY; + if (__hyp_ack_skip_pgtable_check(tx)) return 0; @@ -1042,3 +1045,48 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages) return ret; } + +int hyp_pin_shared_mem(void *from, void *to) +{ + u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE); + u64 end = PAGE_ALIGN((u64)to); + u64 size = end - start; + int ret; + + host_lock_component(); + hyp_lock_component(); + + ret = __host_check_page_state_range(__hyp_pa(start), size, + PKVM_PAGE_SHARED_OWNED); + if (ret) + goto unlock; + + ret = __hyp_check_page_state_range(start, size, + PKVM_PAGE_SHARED_BORROWED); + if (ret) + goto unlock; + + for (cur = start; cur < end; cur += PAGE_SIZE) + hyp_page_ref_inc(hyp_virt_to_page(cur)); + +unlock: + hyp_unlock_component();
+ host_unlock_component(); + + return ret; +} + +void hyp_unpin_shared_mem(void *from, void *to) +{ + u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE); + u64 end = PAGE_ALIGN((u64)to); + + host_lock_component(); + hyp_lock_component(); + + for (cur = start; cur < end; cur += PAGE_SIZE) + hyp_page_ref_dec(hyp_virt_to_page(cur)); + + hyp_unlock_component(); + host_unlock_component(); +}
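Note: pinning amounts to taking a per-page refcount under the host and hyp locks, which hyp_ack_unshare() then consults to fail unshares with -EBUSY. A rough standalone model of the alignment and refcounting arithmetic (toy page array, assumed 4 KiB pages; not the EL2 code):

#include <errno.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* One refcount per page of a toy 16-page "shared" region. */
static unsigned int refcount[16];

static void pin_range(unsigned long from, unsigned long to)
{
	unsigned long cur, start = ALIGN_DOWN(from, PAGE_SIZE);
	unsigned long end = PAGE_ALIGN(to);

	/* Every page overlapped by [from, to) gets a reference. */
	for (cur = start; cur < end; cur += PAGE_SIZE)
		refcount[cur / PAGE_SIZE]++;
}

/* An unshare of a pinned page must fail, so model the -EBUSY check. */
static int try_unshare(unsigned long addr)
{
	return refcount[ALIGN_DOWN(addr, PAGE_SIZE) / PAGE_SIZE] ? -EBUSY : 0;
}

int main(void)
{
	/* Pin a struct spanning a page boundary: both pages get a ref. */
	pin_range(4000, 4200);
	printf("page0=%u page1=%u unshare(page1)=%d\n",
	       refcount[0], refcount[1], try_unshare(5000));
	return 0;
}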
From patchwork Thu Jun 30 13:57:33 2022
From: Will Deacon
Subject: [PATCH v2 10/24] KVM: arm64: Include asm/kvm_mmu.h in nvhe/mem_protect.h
Date: Thu, 30 Jun 2022 14:57:33 +0100
Message-Id: <20220630135747.26983-11-will@kernel.org>

nvhe/mem_protect.h refers to __load_stage2() in the definition of __load_host_stage2() but doesn't include the relevant header. Include asm/kvm_mmu.h in nvhe/mem_protect.h so that users of the latter don't have to do this themselves.

Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 998bf165af71..3bea816296dc 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -8,6 +8,7 @@ #define __KVM_NVHE_MEM_PROTECT__ #include #include +#include <asm/kvm_mmu.h> #include #include #include

From patchwork Thu Jun 30 13:57:34 2022
From: Will Deacon
Subject: [PATCH v2 11/24] KVM: arm64: Add hyp_spinlock_t static initializer
Date: Thu, 30 Jun 2022 14:57:34 +0100
Message-Id: <20220630135747.26983-12-will@kernel.org>

From: Fuad Tabba

Having a static initializer for hyp_spinlock_t simplifies its use when there isn't an initializing function. No functional change intended.
Signed-off-by: Fuad Tabba Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h index 4652fd04bdbe..7c7ea8c55405 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h +++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h @@ -28,9 +28,17 @@ typedef union hyp_spinlock { }; } hyp_spinlock_t; +#define __HYP_SPIN_LOCK_INITIALIZER \ + { .__val = 0 } + +#define __HYP_SPIN_LOCK_UNLOCKED \ + ((hyp_spinlock_t) __HYP_SPIN_LOCK_INITIALIZER) + +#define DEFINE_HYP_SPINLOCK(x) hyp_spinlock_t x = __HYP_SPIN_LOCK_UNLOCKED + #define hyp_spin_lock_init(l) \ do { \ - *(l) = (hyp_spinlock_t){ .__val = 0 }; \ + *(l) = __HYP_SPIN_LOCK_UNLOCKED; \ } while (0) static inline void hyp_spin_lock(hyp_spinlock_t *lock)
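Note: the win here is purely ergonomic, in that a lock living in static storage can now be defined in one line, with no hyp_spin_lock_init() call needed from a setup path. The sketch below mirrors the macro pattern with a toy lock type (hypothetical names, plain C; not the EL2 implementation):

#include <stdio.h>

typedef union toy_spinlock {
	unsigned int __val;
} toy_spinlock_t;

#define __TOY_SPIN_LOCK_INITIALIZER	{ .__val = 0 }
#define __TOY_SPIN_LOCK_UNLOCKED	((toy_spinlock_t) __TOY_SPIN_LOCK_INITIALIZER)
#define DEFINE_TOY_SPINLOCK(x)		toy_spinlock_t x = __TOY_SPIN_LOCK_INITIALIZER

/* Usable at file scope, with no initializing function needed. */
static DEFINE_TOY_SPINLOCK(shadow_lock);

/* The compound-literal form still serves runtime re-initialization. */
static void toy_spin_lock_init(toy_spinlock_t *l)
{
	*l = __TOY_SPIN_LOCK_UNLOCKED;
}

int main(void)
{
	toy_spinlock_t dynamic = { .__val = 1 };

	toy_spin_lock_init(&dynamic);
	printf("static: %u, dynamic: %u\n", shadow_lock.__val, dynamic.__val);
	return 0;
}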
From patchwork Thu Jun 30 13:57:35 2022
From: Will Deacon
Subject: [PATCH v2 12/24] KVM: arm64: Introduce shadow VM state at EL2
Date: Thu, 30 Jun 2022 14:57:35 +0100
Message-Id: <20220630135747.26983-13-will@kernel.org>

From: Fuad Tabba

Introduce a table of shadow VM structures at EL2 and provide hypercalls to the host for creating and destroying shadow VMs.

Signed-off-by: Fuad Tabba Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_asm.h | 2 + arch/arm64/include/asm/kvm_host.h | 6 + arch/arm64/include/asm/kvm_pgtable.h | 8 + arch/arm64/include/asm/kvm_pkvm.h | 8 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 3 + arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 60 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 + arch/arm64/kvm/hyp/nvhe/pkvm.c | 398 ++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/setup.c | 8 + arch/arm64/kvm/hyp/pgtable.c | 9 + arch/arm64/kvm/pkvm.c | 1 + 12 files changed, 538 insertions(+) create mode 100644 arch/arm64/kvm/hyp/include/nvhe/pkvm.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 2e277f2ed671..fac4ed699913 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -76,6 +76,8 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs, __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps, + __KVM_HOST_SMCCC_FUNC___pkvm_init_shadow, + __KVM_HOST_SMCCC_FUNC___pkvm_teardown_shadow, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 2cc42e1fec18..41348ac728f9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -115,6 +115,10 @@ struct kvm_smccc_features { unsigned long vendor_hyp_bmap; }; +struct kvm_protected_vm { + unsigned int shadow_handle; +}; + struct kvm_arch { struct kvm_s2_mmu mmu; @@ -166,6 +170,8 @@ struct kvm_arch { /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; + + struct kvm_protected_vm pkvm; }; struct kvm_vcpu_fault_info { diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 9f339dffbc1a..2d6b5058f7d3 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -288,6 +288,14 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); */ u64 kvm_get_vtcr(u64
mmfr0, u64 mmfr1, u32 phys_shift); +/* + * kvm_pgtable_stage2_pgd_size() - Helper to compute size of a stage-2 PGD + * @vtcr: Content of the VTCR register. + * + * Return: the size (in bytes) of the stage-2 PGD + */ +size_t kvm_pgtable_stage2_pgd_size(u64 vtcr); + /** * __kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table. * @pgt: Uninitialised page-table structure to initialise. diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index 8f7b8a2314bb..11526e89fe5c 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -9,6 +9,9 @@ #include #include +/* Maximum number of protected VMs that can be created. */ +#define KVM_MAX_PVMS 255 + #define HYP_MEMBLOCK_REGIONS 128 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[]; @@ -40,6 +43,11 @@ static inline unsigned long hyp_vmemmap_pages(size_t vmemmap_entry_size) return res >> PAGE_SHIFT; } +static inline unsigned long hyp_shadow_table_pages(void) +{ + return PAGE_ALIGN(KVM_MAX_PVMS * sizeof(void *)) >> PAGE_SHIFT; +} + static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages) { unsigned long total = 0, i; diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 3bea816296dc..3a0817b5c739 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -11,6 +11,7 @@ #include #include #include +#include #include /* @@ -68,10 +69,12 @@ bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id); int kvm_host_prepare_stage2(void *pgt_pool_base); +int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd); void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); int hyp_pin_shared_mem(void *from, void *to); void hyp_unpin_shared_mem(void *from, void *to); +void reclaim_guest_pages(struct kvm_shadow_vm *vm); static __always_inline void __load_host_stage2(void) { diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h new file mode 100644 index 000000000000..1d0a33f70879 --- /dev/null +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Google LLC + * Author: Fuad Tabba + */ + +#ifndef __ARM64_KVM_NVHE_PKVM_H__ +#define __ARM64_KVM_NVHE_PKVM_H__ + +#include + +/* + * Holds the relevant data for maintaining the vcpu state completely at hyp. + */ +struct kvm_shadow_vcpu_state { + /* The data for the shadow vcpu. */ + struct kvm_vcpu shadow_vcpu; + + /* A pointer to the host's vcpu. */ + struct kvm_vcpu *host_vcpu; + + /* A pointer to the shadow vm. */ + struct kvm_shadow_vm *shadow_vm; +}; + +/* + * Holds the relevant data for running a protected vm. + */ +struct kvm_shadow_vm { + /* The data for the shadow kvm. */ + struct kvm kvm; + + /* The host's kvm structure. */ + struct kvm *host_kvm; + + /* The total size of the donated shadow area. */ + size_t shadow_area_size; + + struct kvm_pgtable pgt; + + /* Array of the shadow state per vcpu. 
*/ + struct kvm_shadow_vcpu_state shadow_vcpu_states[0]; +}; + +static inline struct kvm_shadow_vcpu_state *get_shadow_state(struct kvm_vcpu *shadow_vcpu) +{ + return container_of(shadow_vcpu, struct kvm_shadow_vcpu_state, shadow_vcpu); +} + +static inline struct kvm_shadow_vm *get_shadow_vm(struct kvm_vcpu *shadow_vcpu) +{ + return get_shadow_state(shadow_vcpu)->shadow_vm; +} + +void hyp_shadow_table_init(void *tbl); +int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, + size_t shadow_size, unsigned long pgd_hva); +int __pkvm_teardown_shadow(unsigned int shadow_handle); + +#endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 3cea4b6ac23e..a1fbd11c8041 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -15,6 +15,7 @@ #include #include +#include #include DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); @@ -191,6 +192,24 @@ static void handle___pkvm_vcpu_init_traps(struct kvm_cpu_context *host_ctxt) __pkvm_vcpu_init_traps(kern_hyp_va(vcpu)); } +static void handle___pkvm_init_shadow(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(struct kvm *, host_kvm, host_ctxt, 1); + DECLARE_REG(unsigned long, host_shadow_va, host_ctxt, 2); + DECLARE_REG(size_t, shadow_size, host_ctxt, 3); + DECLARE_REG(unsigned long, pgd, host_ctxt, 4); + + cpu_reg(host_ctxt, 1) = __pkvm_init_shadow(host_kvm, host_shadow_va, + shadow_size, pgd); +} + +static void handle___pkvm_teardown_shadow(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(unsigned int, shadow_handle, host_ctxt, 1); + + cpu_reg(host_ctxt, 1) = __pkvm_teardown_shadow(shadow_handle); +} + typedef void (*hcall_t)(struct kvm_cpu_context *); #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x @@ -220,6 +239,8 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__vgic_v3_save_aprs), HANDLE_FUNC(__vgic_v3_restore_aprs), HANDLE_FUNC(__pkvm_vcpu_init_traps), + HANDLE_FUNC(__pkvm_init_shadow), + HANDLE_FUNC(__pkvm_teardown_shadow), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index e2e3b30b072e..9baf731736be 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -141,6 +141,20 @@ int kvm_host_prepare_stage2(void *pgt_pool_base) return 0; } +int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) +{ + vm->pgt.pgd = pgd; + return 0; +} + +void reclaim_guest_pages(struct kvm_shadow_vm *vm) +{ + unsigned long nr_pages; + + nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; + WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(vm->pgt.pgd), nr_pages)); +} + int __pkvm_prot_finalize(void) { struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu; diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 99c8d8b73e70..77aeb787670b 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -7,6 +7,9 @@ #include #include #include +#include +#include +#include #include /* @@ -183,3 +186,398 @@ void __pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu) pvm_init_traps_aa64mmfr0(vcpu); pvm_init_traps_aa64mmfr1(vcpu); } + +/* + * Start the shadow table handle at the offset defined instead of at 0. + * Mainly for sanity checking and debugging. 
+ */ +#define HANDLE_OFFSET 0x1000 + +static unsigned int shadow_handle_to_idx(unsigned int shadow_handle) +{ + return shadow_handle - HANDLE_OFFSET; +} + +static unsigned int idx_to_shadow_handle(unsigned int idx) +{ + return idx + HANDLE_OFFSET; +} + +/* + * Spinlock for protecting the shadow table related state. + * Protects writes to shadow_table, nr_shadow_entries, and next_shadow_alloc, + * as well as reads and writes to last_shadow_vcpu_lookup. + */ +static DEFINE_HYP_SPINLOCK(shadow_lock); + +/* + * The table of shadow entries for protected VMs in hyp. + * Allocated at hyp initialization and setup. + */ +static struct kvm_shadow_vm **shadow_table; + +/* Current number of vms in the shadow table. */ +static unsigned int nr_shadow_entries; + +/* The next entry index to try to allocate from. */ +static unsigned int next_shadow_alloc; + +void hyp_shadow_table_init(void *tbl) +{ + WARN_ON(shadow_table); + shadow_table = tbl; +} + +/* + * Return the shadow vm corresponding to the handle. + */ +static struct kvm_shadow_vm *find_shadow_by_handle(unsigned int shadow_handle) +{ + unsigned int shadow_idx = shadow_handle_to_idx(shadow_handle); + + if (unlikely(shadow_idx >= KVM_MAX_PVMS)) + return NULL; + + return shadow_table[shadow_idx]; +} + +static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, + unsigned int nr_vcpus) +{ + int i; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_vcpu *host_vcpu = shadow_vcpu_states[i].host_vcpu; + hyp_unpin_shared_mem(host_vcpu, host_vcpu + 1); + } +} + +static int set_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states, + unsigned int nr_vcpus, + struct kvm_vcpu **vcpu_array, + size_t vcpu_array_size) +{ + int i; + + if (vcpu_array_size < sizeof(*vcpu_array) * nr_vcpus) + return -EINVAL; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_vcpu *host_vcpu = kern_hyp_va(vcpu_array[i]); + + if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1)) { + unpin_host_vcpus(shadow_vcpu_states, i); + return -EBUSY; + } + + shadow_vcpu_states[i].host_vcpu = host_vcpu; + } + + return 0; +} + +static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm, + struct kvm_vcpu **vcpu_array, + unsigned int nr_vcpus) +{ + int i; + + vm->host_kvm = kvm; + vm->kvm.created_vcpus = nr_vcpus; + vm->kvm.arch.vtcr = host_kvm.arch.vtcr; + + for (i = 0; i < nr_vcpus; i++) { + struct kvm_shadow_vcpu_state *shadow_vcpu_state = &vm->shadow_vcpu_states[i]; + struct kvm_vcpu *shadow_vcpu = &shadow_vcpu_state->shadow_vcpu; + struct kvm_vcpu *host_vcpu = shadow_vcpu_state->host_vcpu; + + shadow_vcpu_state->shadow_vm = vm; + + shadow_vcpu->kvm = &vm->kvm; + shadow_vcpu->vcpu_id = READ_ONCE(host_vcpu->vcpu_id); + shadow_vcpu->vcpu_idx = i; + + shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu; + } + + return 0; +} + +static bool __exists_shadow(struct kvm *host_kvm) +{ + int i; + unsigned int nr_checked = 0; + + for (i = 0; i < KVM_MAX_PVMS && nr_checked < nr_shadow_entries; i++) { + if (!shadow_table[i]) + continue; + + if (unlikely(shadow_table[i]->host_kvm == host_kvm)) + return true; + + nr_checked++; + } + + return false; +} + +/* + * Allocate a shadow table entry and insert a pointer to the shadow vm. + * + * Return a unique handle to the protected VM on success, + * negative error code on failure. 
+ */ +static unsigned int insert_shadow_table(struct kvm *kvm, + struct kvm_shadow_vm *vm, + size_t shadow_size) +{ + struct kvm_s2_mmu *mmu = &vm->kvm.arch.mmu; + unsigned int shadow_handle; + unsigned int vmid; + + hyp_assert_lock_held(&shadow_lock); + + if (unlikely(nr_shadow_entries >= KVM_MAX_PVMS)) + return -ENOMEM; + + /* + * Initializing protected state might have failed, yet a malicious host + * could trigger this function. Thus, ensure that shadow_table exists. + */ + if (unlikely(!shadow_table)) + return -EINVAL; + + /* Check that a shadow hasn't been created before for this host KVM. */ + if (unlikely(__exists_shadow(kvm))) + return -EEXIST; + + /* Find the next free entry in the shadow table. */ + while (shadow_table[next_shadow_alloc]) + next_shadow_alloc = (next_shadow_alloc + 1) % KVM_MAX_PVMS; + shadow_handle = idx_to_shadow_handle(next_shadow_alloc); + + vm->kvm.arch.pkvm.shadow_handle = shadow_handle; + vm->shadow_area_size = shadow_size; + + /* VMID 0 is reserved for the host */ + vmid = next_shadow_alloc + 1; + if (vmid > 0xff) + return -ENOMEM; + + atomic64_set(&mmu->vmid.id, vmid); + mmu->arch = &vm->kvm.arch; + mmu->pgt = &vm->pgt; + + shadow_table[next_shadow_alloc] = vm; + next_shadow_alloc = (next_shadow_alloc + 1) % KVM_MAX_PVMS; + nr_shadow_entries++; + + return shadow_handle; +} + +/* + * Deallocate and remove the shadow table entry corresponding to the handle. + */ +static void remove_shadow_table(unsigned int shadow_handle) +{ + hyp_assert_lock_held(&shadow_lock); + shadow_table[shadow_handle_to_idx(shadow_handle)] = NULL; + nr_shadow_entries--; +} + +static size_t pkvm_get_shadow_size(unsigned int nr_vcpus) +{ + /* Shadow space for the vm struct and all of its vcpu states. */ + return sizeof(struct kvm_shadow_vm) + + sizeof(struct kvm_shadow_vcpu_state) * nr_vcpus; +} + +/* + * Check whether the size of the area donated by the host is sufficient for + * the shadow structures required for nr_vcpus as well as the shadow vm. + */ +static int check_shadow_size(unsigned int nr_vcpus, size_t shadow_size) +{ + if (nr_vcpus < 1 || nr_vcpus > KVM_MAX_VCPUS) + return -EINVAL; + + /* + * Shadow size is rounded up when allocated and donated by the host, + * so it's likely to be larger than the sum of the struct sizes. + */ + if (shadow_size < pkvm_get_shadow_size(nr_vcpus)) + return -ENOMEM; + + return 0; +} + +static void *map_donated_memory_noclear(unsigned long host_va, size_t size) +{ + void *va = (void *)kern_hyp_va(host_va); + + if (!PAGE_ALIGNED(va) || !PAGE_ALIGNED(size)) + return NULL; + + if (__pkvm_host_donate_hyp(hyp_virt_to_pfn(va), size >> PAGE_SHIFT)) + return NULL; + + return va; +} + +static void *map_donated_memory(unsigned long host_va, size_t size) +{ + void *va = map_donated_memory_noclear(host_va, size); + + if (va) + memset(va, 0, size); + + return va; +} + +static void __unmap_donated_memory(void *va, size_t size) +{ + WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va), size >> PAGE_SHIFT)); +} + +static void unmap_donated_memory(void *va, size_t size) +{ + if (!va) + return; + + memset(va, 0, size); + __unmap_donated_memory(va, size); +} + +static void unmap_donated_memory_noclear(void *va, size_t size) +{ + if (!va) + return; + + __unmap_donated_memory(va, size); +} + +/* + * Initialize the shadow copy of the protected VM state using the memory + * donated by the host. + * + * Unmaps the donated memory from the host at stage 2. + * + * kvm: A pointer to the host's struct kvm (host va). 
+ * shadow_hva: The host va of the area being donated for the shadow state. + * Must be page aligned. + * shadow_size: The size of the area being donated for the shadow state. + * Must be a multiple of the page size. + * pgd_hva: The host va of the area being donated for the stage-2 PGD for + * the VM. Must be page aligned. Its size is implied by the VM's + * VTCR. + * Note: An array of the host KVM vCPUs (host VA) is passed via the pgd, so as + * not to depend on how the vCPUs are laid out in struct kvm. + * + * Return a unique handle to the protected VM on success, + * negative error code on failure. + */ +int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, + size_t shadow_size, unsigned long pgd_hva) +{ + struct kvm_shadow_vm *vm = NULL; + unsigned int nr_vcpus; + size_t pgd_size = 0; + void *pgd = NULL; + int ret; + + kvm = kern_hyp_va(kvm); + ret = hyp_pin_shared_mem(kvm, kvm + 1); + if (ret) + return ret; + + nr_vcpus = READ_ONCE(kvm->created_vcpus); + ret = check_shadow_size(nr_vcpus, shadow_size); + if (ret) + goto err_unpin_kvm; + + ret = -ENOMEM; + + vm = map_donated_memory(shadow_hva, shadow_size); + if (!vm) + goto err_remove_mappings; + + pgd_size = kvm_pgtable_stage2_pgd_size(host_kvm.arch.vtcr); + pgd = map_donated_memory_noclear(pgd_hva, pgd_size); + if (!pgd) + goto err_remove_mappings; + + ret = set_host_vcpus(vm->shadow_vcpu_states, nr_vcpus, pgd, pgd_size); + if (ret) + goto err_remove_mappings; + + ret = init_shadow_structs(kvm, vm, pgd, nr_vcpus); + if (ret < 0) + goto err_unpin_host_vcpus; + + /* Add the entry to the shadow table. */ + hyp_spin_lock(&shadow_lock); + ret = insert_shadow_table(kvm, vm, shadow_size); + if (ret < 0) + goto err_unlock_unpin_host_vcpus; + + ret = kvm_guest_prepare_stage2(vm, pgd); + if (ret) + goto err_remove_shadow_table; + hyp_spin_unlock(&shadow_lock); + + return vm->kvm.arch.pkvm.shadow_handle; + +err_remove_shadow_table: + remove_shadow_table(vm->kvm.arch.pkvm.shadow_handle); +err_unlock_unpin_host_vcpus: + hyp_spin_unlock(&shadow_lock); +err_unpin_host_vcpus: + unpin_host_vcpus(vm->shadow_vcpu_states, nr_vcpus); +err_remove_mappings: + unmap_donated_memory(vm, shadow_size); + unmap_donated_memory_noclear(pgd, pgd_size); +err_unpin_kvm: + hyp_unpin_shared_mem(kvm, kvm + 1); + return ret; +} + +int __pkvm_teardown_shadow(unsigned int shadow_handle) +{ + struct kvm_shadow_vm *vm; + size_t shadow_size; + int err; + + /* Lookup then remove entry from the shadow table.
*/ + hyp_spin_lock(&shadow_lock); + vm = find_shadow_by_handle(shadow_handle); + if (!vm) { + err = -ENOENT; + goto err_unlock; + } + + if (WARN_ON(hyp_page_count(vm))) { + err = -EBUSY; + goto err_unlock; + } + + /* Ensure the VMID is clean before it can be reallocated */ + __kvm_tlb_flush_vmid(&vm->kvm.arch.mmu); + remove_shadow_table(shadow_handle); + hyp_spin_unlock(&shadow_lock); + + /* Reclaim guest pages (including page-table pages) */ + reclaim_guest_pages(vm); + unpin_host_vcpus(vm->shadow_vcpu_states, vm->kvm.created_vcpus); + + /* Push the metadata pages to the teardown memcache */ + shadow_size = vm->shadow_area_size; + hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1); + + memset(vm, 0, shadow_size); + unmap_donated_memory_noclear(vm, shadow_size); + return 0; + +err_unlock: + hyp_spin_unlock(&shadow_lock); + return err; +} diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index 0312c9c74a5a..fb0eff15a89f 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -16,6 +16,7 @@ #include #include #include +#include #include unsigned long hyp_nr_cpus; @@ -24,6 +25,7 @@ unsigned long hyp_nr_cpus; (unsigned long)__per_cpu_start) static void *vmemmap_base; +static void *shadow_table_base; static void *hyp_pgt_base; static void *host_s2_pgt_base; static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops; @@ -40,6 +42,11 @@ static int divide_memory_pool(void *virt, unsigned long size) if (!vmemmap_base) return -ENOMEM; + nr_pages = hyp_shadow_table_pages(); + shadow_table_base = hyp_early_alloc_contig(nr_pages); + if (!shadow_table_base) + return -ENOMEM; + nr_pages = hyp_s1_pgtable_pages(); hyp_pgt_base = hyp_early_alloc_contig(nr_pages); if (!hyp_pgt_base) @@ -314,6 +321,7 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; + hyp_shadow_table_init(shadow_table_base); out: /* * We tail-called to here from handle___pkvm_init() and will not return, diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 2cb3867eb7c2..1d300313009d 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1200,6 +1200,15 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, return 0; } +size_t kvm_pgtable_stage2_pgd_size(u64 vtcr) +{ + u32 ia_bits = VTCR_EL2_IPA(vtcr); + u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr); + u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0; + + return kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE; +} + static int stage2_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, enum kvm_pgtable_walk_flags flag, void * const arg) diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 34229425b25d..3947063cc3a1 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -71,6 +71,7 @@ void __init kvm_hyp_reserve(void) hyp_mem_pages += hyp_s1_pgtable_pages(); hyp_mem_pages += host_s2_pgtable_pages(); + hyp_mem_pages += hyp_shadow_table_pages(); hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE); /*
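Note: two small invariants in the patch above are worth spelling out. Handles are just table indices offset by HANDLE_OFFSET, and the VMID is derived as index + 1 so that VMID 0 stays reserved for the host; with KVM_MAX_PVMS at 255 the largest derived VMID is exactly 0xff, so the vmid > 0xff check only fires if the constant ever grows. A standalone sketch of that arithmetic, using the constants from the patch:

#include <stdio.h>

#define HANDLE_OFFSET	0x1000
#define KVM_MAX_PVMS	255

static unsigned int idx_to_shadow_handle(unsigned int idx)
{
	return idx + HANDLE_OFFSET;
}

static unsigned int shadow_handle_to_idx(unsigned int handle)
{
	return handle - HANDLE_OFFSET;
}

int main(void)
{
	unsigned int idx, handle, vmid;

	for (idx = 0; idx < KVM_MAX_PVMS; idx += 127) {
		handle = idx_to_shadow_handle(idx);
		vmid = idx + 1;		/* VMID 0 is reserved for the host */
		printf("idx=%3u handle=0x%x back=%u vmid=%u (8-bit: %d)\n",
		       idx, handle, shadow_handle_to_idx(handle), vmid,
		       vmid <= 0xff);
	}
	return 0;
}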
From patchwork Thu Jun 30 13:57:36 2022
From: Will Deacon
Subject: [PATCH v2 13/24] KVM: arm64: Instantiate VM shadow data from EL1
Date: Thu, 30 Jun 2022 14:57:36 +0100
Message-Id: <20220630135747.26983-14-will@kernel.org>

From: Fuad Tabba

Now that EL2 provides calls to create and destroy shadow VM structures, plumb these into the KVM code at EL1 so that a shadow VM
is created on first vCPU run and destroyed later along with the 'struct kvm' at teardown time. Signed-off-by: Fuad Tabba Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_host.h | 6 ++ arch/arm64/include/asm/kvm_pkvm.h | 4 ++ arch/arm64/kvm/arm.c | 14 ++++ arch/arm64/kvm/hyp/hyp-constants.c | 3 + arch/arm64/kvm/pkvm.c | 112 +++++++++++++++++++++++++++++ 5 files changed, 139 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 41348ac728f9..e91456f63161 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -117,6 +117,12 @@ struct kvm_smccc_features { struct kvm_protected_vm { unsigned int shadow_handle; + struct mutex shadow_lock; + + struct { + void *pgd; + void *shadow; + } hyp_donations; }; struct kvm_arch { diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index 11526e89fe5c..1dc7372950b1 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -14,6 +14,10 @@ #define HYP_MEMBLOCK_REGIONS 128 +int kvm_init_pvm(struct kvm *kvm); +int kvm_shadow_create(struct kvm *kvm); +void kvm_shadow_destroy(struct kvm *kvm); + extern struct memblock_region kvm_nvhe_sym(hyp_memory)[]; extern unsigned int kvm_nvhe_sym(hyp_memblock_nr); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a9dd7ec38f38..66e1d37858f1 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include @@ -150,6 +151,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) if (ret) goto out_free_stage2_pgd; + ret = kvm_init_pvm(kvm); + if (ret) + goto out_free_stage2_pgd; + if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) goto out_free_stage2_pgd; cpumask_copy(kvm->arch.supported_cpus, cpu_possible_mask); @@ -185,6 +190,9 @@ void kvm_arch_destroy_vm(struct kvm *kvm) kvm_vgic_destroy(kvm); + if (is_protected_kvm_enabled()) + kvm_shadow_destroy(kvm); + kvm_destroy_vcpus(kvm); kvm_unshare_hyp(kvm, kvm + 1); @@ -567,6 +575,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) if (ret) return ret; + if (is_protected_kvm_enabled()) { + ret = kvm_shadow_create(kvm); + if (ret) + return ret; + } + if (!irqchip_in_kernel(kvm)) { /* * Tell the rest of the code that there are userspace irqchip diff --git a/arch/arm64/kvm/hyp/hyp-constants.c b/arch/arm64/kvm/hyp/hyp-constants.c index b3742a6691e8..eee79527f901 100644 --- a/arch/arm64/kvm/hyp/hyp-constants.c +++ b/arch/arm64/kvm/hyp/hyp-constants.c @@ -2,9 +2,12 @@ #include #include +#include int main(void) { DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page)); + DEFINE(KVM_SHADOW_VM_SIZE, sizeof(struct kvm_shadow_vm)); + DEFINE(KVM_SHADOW_VCPU_STATE_SIZE, sizeof(struct kvm_shadow_vcpu_state)); return 0; } diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 3947063cc3a1..b4466b31d7c8 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -6,6 +6,7 @@ #include #include +#include #include #include @@ -94,3 +95,114 @@ void __init kvm_hyp_reserve(void) kvm_info("Reserved %lld MiB at 0x%llx\n", hyp_mem_size >> 20, hyp_mem_base); } + +/* + * Allocates and donates memory for EL2 shadow structs. + * + * Allocates space for the shadow state, which includes the shadow vm as well as + * the shadow vcpu states. + * + * Stores an opaque handle in the kvm struct for future reference. + * + * Return 0 on success, negative error code on failure.
+ */ +static int __kvm_shadow_create(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu, **vcpu_array; + unsigned int shadow_handle; + size_t pgd_sz, shadow_sz; + void *pgd, *shadow_addr; + unsigned long idx; + int ret; + + if (kvm->created_vcpus < 1) + return -EINVAL; + + pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr); + /* + * The PGD pages will be reclaimed using a hyp_memcache which implies + * page granularity. So, use alloc_pages_exact() to get individual + * refcounts. + */ + pgd = alloc_pages_exact(pgd_sz, GFP_KERNEL_ACCOUNT); + if (!pgd) + return -ENOMEM; + + /* Allocate memory to donate to hyp for the kvm and vcpu state. */ + shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE + + KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus); + shadow_addr = alloc_pages_exact(shadow_sz, GFP_KERNEL_ACCOUNT); + if (!shadow_addr) { + ret = -ENOMEM; + goto free_pgd; + } + + /* Stash the vcpu pointers into the PGD */ + BUILD_BUG_ON(KVM_MAX_VCPUS > (PAGE_SIZE / sizeof(u64))); + vcpu_array = pgd; + kvm_for_each_vcpu(idx, vcpu, kvm) { + /* Indexing of the vcpus to be sequential starting at 0. */ + if (WARN_ON(vcpu->vcpu_idx != idx)) { + ret = -EINVAL; + goto free_shadow; + } + + vcpu_array[idx] = vcpu; + } + + /* Donate the shadow memory to hyp and let hyp initialize it. */ + ret = kvm_call_hyp_nvhe(__pkvm_init_shadow, kvm, shadow_addr, shadow_sz, + pgd); + if (ret < 0) + goto free_shadow; + + shadow_handle = ret; + + /* Store the shadow handle given by hyp for future call reference. */ + kvm->arch.pkvm.shadow_handle = shadow_handle; + kvm->arch.pkvm.hyp_donations.pgd = pgd; + kvm->arch.pkvm.hyp_donations.shadow = shadow_addr; + return 0; + +free_shadow: + free_pages_exact(shadow_addr, shadow_sz); +free_pgd: + free_pages_exact(pgd, pgd_sz); + return ret; +} + +int kvm_shadow_create(struct kvm *kvm) +{ + int ret = 0; + + mutex_lock(&kvm->arch.pkvm.shadow_lock); + if (!kvm->arch.pkvm.shadow_handle) + ret = __kvm_shadow_create(kvm); + mutex_unlock(&kvm->arch.pkvm.shadow_lock); + + return ret; +} + +void kvm_shadow_destroy(struct kvm *kvm) +{ + size_t pgd_sz, shadow_sz; + + if (kvm->arch.pkvm.shadow_handle) + WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow, + kvm->arch.pkvm.shadow_handle)); + + kvm->arch.pkvm.shadow_handle = 0; + + shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE + + KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus); + pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr); + + free_pages_exact(kvm->arch.pkvm.hyp_donations.shadow, shadow_sz); + free_pages_exact(kvm->arch.pkvm.hyp_donations.pgd, pgd_sz); +} + +int kvm_init_pvm(struct kvm *kvm) +{ + mutex_init(&kvm->arch.pkvm.shadow_lock); + return 0; +}
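Note: for the donation to work, EL1 and EL2 must agree on the size of the shadow area, which is exactly why hyp-constants.c exports the two struct sizes to the host. A standalone sketch of the sizing arithmetic used on both sides (illustrative struct sizes standing in for the generated KVM_SHADOW_VM_SIZE and KVM_SHADOW_VCPU_STATE_SIZE values):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Stand-ins for the constants generated by hyp-constants.c. */
#define KVM_SHADOW_VM_SIZE		10240UL
#define KVM_SHADOW_VCPU_STATE_SIZE	8192UL

static unsigned long shadow_area_size(unsigned int nr_vcpus)
{
	/*
	 * Rounding up to page granularity here is what makes the EL2-side
	 * check 'shadow_size < pkvm_get_shadow_size(nr_vcpus)' safe: the
	 * donated area is never smaller than the sum of the struct sizes.
	 */
	return PAGE_ALIGN(KVM_SHADOW_VM_SIZE +
			  KVM_SHADOW_VCPU_STATE_SIZE * nr_vcpus);
}

int main(void)
{
	unsigned int n;

	for (n = 1; n <= 4; n++)
		printf("%u vcpus -> %lu bytes (%lu pages)\n", n,
		       shadow_area_size(n), shadow_area_size(n) / PAGE_SIZE);
	return 0;
}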
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 14/24] KVM: arm64: Add pcpu fixmap infrastructure at EL2
Date: Thu, 30 Jun 2022 14:57:37 +0100
Message-Id: <20220630135747.26983-15-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

We will soon need to temporarily map pages into the hypervisor stage-1
in nVHE protected mode. To do this efficiently, let's introduce a
per-cpu fixmap that allows a single page to be mapped without taking
any lock or allocating memory.
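As an illustration of the intended usage pattern, here is a minimal,
hedged sketch (hyp_zero_donated_page() is a hypothetical caller invented
for this example, not a function from this series):

static int hyp_zero_donated_page(phys_addr_t phys)
{
	/* Map the page into this CPU's private fixmap slot. */
	void *va = hyp_fixmap_map(phys);

	if (!va)
		return -EINVAL;

	memset(va, 0, PAGE_SIZE);

	/* Remove the mapping again; returns 0 on success. */
	return hyp_fixmap_unmap();
}

No lock or allocation is needed between map and unmap because each CPU
owns its own fixmap slot and EL2 code is not preemptible.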
Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 + arch/arm64/kvm/hyp/include/nvhe/mm.h | 4 ++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 1 - arch/arm64/kvm/hyp/nvhe/mm.c | 72 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/setup.c | 4 ++ 5 files changed, 82 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 3a0817b5c739..d11d9d68a680 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -59,6 +59,8 @@ enum pkvm_component_id { PKVM_ID_HYP, }; +extern unsigned long hyp_nr_cpus; + int __pkvm_prot_finalize(void); int __pkvm_host_share_hyp(u64 pfn); int __pkvm_host_unshare_hyp(u64 pfn); diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h index b2ee6d5df55b..882c5711eda5 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h @@ -13,6 +13,10 @@ extern struct kvm_pgtable pkvm_pgtable; extern hyp_spinlock_t pkvm_pgd_lock; +int hyp_create_pcpu_fixmap(void); +void *hyp_fixmap_map(phys_addr_t phys); +int hyp_fixmap_unmap(void); + int hyp_create_idmap(u32 hyp_va_bits); int hyp_map_vectors(void); int hyp_back_vmemmap(phys_addr_t back); diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 9baf731736be..a0af23de2640 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -21,7 +21,6 @@ #define KVM_HOST_S2_FLAGS (KVM_PGTABLE_S2_NOFWB | KVM_PGTABLE_S2_IDMAP) -extern unsigned long hyp_nr_cpus; struct host_kvm host_kvm; static struct hyp_pool host_s2_pool; diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c index d3a3b47181de..17d689483ec4 100644 --- a/arch/arm64/kvm/hyp/nvhe/mm.c +++ b/arch/arm64/kvm/hyp/nvhe/mm.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -24,6 +25,7 @@ struct memblock_region hyp_memory[HYP_MEMBLOCK_REGIONS]; unsigned int hyp_memblock_nr; static u64 __io_map_base; +static DEFINE_PER_CPU(void *, hyp_fixmap_base); static int __pkvm_create_mappings(unsigned long start, unsigned long size, unsigned long phys, enum kvm_pgtable_prot prot) @@ -212,6 +214,76 @@ int hyp_map_vectors(void) return 0; } +void *hyp_fixmap_map(phys_addr_t phys) +{ + void *addr = *this_cpu_ptr(&hyp_fixmap_base); + int ret = kvm_pgtable_hyp_map(&pkvm_pgtable, (u64)addr, PAGE_SIZE, + phys, PAGE_HYP); + return ret ? NULL : addr; +} + +int hyp_fixmap_unmap(void) +{ + void *addr = *this_cpu_ptr(&hyp_fixmap_base); + int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)addr, PAGE_SIZE); + + return (ret != PAGE_SIZE) ? 
-EINVAL : 0; +} + +static int __pin_pgtable_cb(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, + enum kvm_pgtable_walk_flags flag, void * const arg) +{ + if (!kvm_pte_valid(*ptep) || level != KVM_PGTABLE_MAX_LEVELS - 1) + return -EINVAL; + hyp_page_ref_inc(hyp_virt_to_page(ptep)); + + return 0; +} + +static int hyp_pin_pgtable_pages(u64 addr) +{ + struct kvm_pgtable_walker walker = { + .cb = __pin_pgtable_cb, + .flags = KVM_PGTABLE_WALK_LEAF, + }; + + return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker); +} + +int hyp_create_pcpu_fixmap(void) +{ + unsigned long addr, i; + int ret; + + for (i = 0; i < hyp_nr_cpus; i++) { + ret = pkvm_alloc_private_va_range(PAGE_SIZE, &addr); + if (ret) + return ret; + + /* + * Create a dummy mapping, to get the intermediate page-table + * pages allocated, then take a reference on the last level + * page to keep it around at all times. + */ + ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PAGE_SIZE, + __hyp_pa(__hyp_bss_start), PAGE_HYP); + if (ret) + return ret; + + ret = hyp_pin_pgtable_pages(addr); + if (ret) + return ret; + + ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, addr, PAGE_SIZE); + if (ret != PAGE_SIZE) + return -EINVAL; + + *per_cpu_ptr(&hyp_fixmap_base, i) = (void *)addr; + } + + return 0; +} + int hyp_create_idmap(u32 hyp_va_bits) { unsigned long start, end; diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index fb0eff15a89f..3f689ffb2693 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -321,6 +321,10 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; + ret = hyp_create_pcpu_fixmap(); + if (ret) + goto out; + hyp_shadow_table_init(shadow_table_base); out: /* From patchwork Thu Jun 30 13:57:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D9AF3C43334 for ; Thu, 30 Jun 2022 14:08:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=bsuTMImYGtOWVmZyNaVBhXiGpSg5vx4JqV+hMqK+flw=; b=N8q7y70f4EqBxh PZKRDRAGGUuwEzZasqav30ssnJ+GaPdnvtiuPQD/wMQWl+atlFjtqctmWqjJg1AW+CHT9WF506bXw 33jJpSQOrDHghcVy+OmsQSQ+cgoYWbns3GFqhN5FC4KiQYjntjWU22bAzM1/CoiJ0a3m09BW11hom 8pPB87eKd+hXwos+Rw2VN5pV16W/eE2yZo6+/l99VNjj9W/k8hW/SGDPYyeVkdDOEtgnJr+u+qZoe lREgcQKg7lZo+02yvaQ83hKjbYjM/EgUyObyhoKJQQ6Qg2i1QX8DDH6EwtGeTaX5YYFHwCJjwsF0Y po0nB9LkHSEZdC9D/CBg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uoU-00HWHD-8s; Thu, 30 Jun 2022 14:06:35 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uh5-00HSpf-Lw for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 15/24] KVM: arm64: Initialise hyp symbols regardless of pKVM
Date: Thu, 30 Jun 2022 14:57:38 +0100
Message-Id: <20220630135747.26983-16-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

The nVHE object at EL2 maintains its own copies of some host variables
so that, when pKVM is enabled, the host cannot directly modify the
hypervisor state. When running in normal nVHE mode, however, these
variables are still mirrored at EL2 but are not initialised.

Initialise the hypervisor symbols from the host copies regardless of
pKVM, ensuring that any reference to this data at EL2 with normal nVHE
will return a sensibly initialised value.
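The mirroring idiom this builds on, as a minimal sketch (my_hyp_val and
my_host_val are hypothetical names used purely for illustration):

/* In the nVHE object: */
u64 my_hyp_val;

/* In the host kernel, which sees nVHE symbols under a mangled name: */
extern u64 kvm_nvhe_sym(my_hyp_val);

static void init_my_hyp_val(void)
{
	/* Copy the value across while the host is still trusted. */
	kvm_nvhe_sym(my_hyp_val) = my_host_val;
}

With this patch, such assignments happen in kvm_hyp_init_symbols() for
pKVM and non-protected nVHE alike.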
Signed-off-by: Will Deacon --- arch/arm64/kvm/arm.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 66e1d37858f1..a2343640c73c 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1891,11 +1891,8 @@ static int do_pkvm_init(u32 hyp_va_bits) return ret; } -static int kvm_hyp_init_protection(u32 hyp_va_bits) +static void kvm_hyp_init_symbols(void) { - void *addr = phys_to_virt(hyp_mem_base); - int ret; - kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1); kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1); @@ -1904,6 +1901,12 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits) kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1); +} + +static int kvm_hyp_init_protection(u32 hyp_va_bits) +{ + void *addr = phys_to_virt(hyp_mem_base); + int ret; ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP); if (ret) @@ -2078,6 +2081,8 @@ static int init_hyp_mode(void) cpu_prepare_hyp_mode(cpu); } + kvm_hyp_init_symbols(); + if (is_protected_kvm_enabled()) { init_cpu_logical_map(); @@ -2085,9 +2090,7 @@ static int init_hyp_mode(void) err = -ENODEV; goto out_err; } - } - if (is_protected_kvm_enabled()) { err = kvm_hyp_init_protection(hyp_va_bits); if (err) { kvm_err("Failed to init hyp memory protection\n"); From patchwork Thu Jun 30 13:57:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901879 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 187C1CCA47B for ; Thu, 30 Jun 2022 14:08:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=bp3Ye97ahcW62eDsQM3AvCfezQLlipVsGdnSzSrWbUQ=; b=WyXT5vJuVf0aFm Qs/wJ5ZoMoLyUjroTVQ0uQbQCx4j4pVCsQ5vNRwAm9iUR2byozaF6emY/1sMMAdJZnd3dgPqP08Ul sI0aC2DbrPAOoaFb4T1l2KM7bk2QnthX6BoSdlzl0hA3eq4IRE83r87Y8NpHrH7aKmqp0u8evJyzY 4yqfH9aM5APClUm4pmJKzxIsJUgbI43VsbKU69QZ5cxkfeJDwF2kLdNAshT66TJbI0+oqt8mY9hS1 or9u3zWbshkm7Qhi4sH6pMgaAwePanbn45MKRr1K6GwlaL+RN7Z1KVUfbEY9ztBlgbo/4JTGXfQMb gIQYOQdmik8j0jC5L7Jw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6upD-00HWgF-Vm; Thu, 30 Jun 2022 14:07:21 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uhB-00HSro-6v for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 2022 13:59:03 +0000 Received: from smtp.kernel.org 
(relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id BE7FDB82AEF; Thu, 30 Jun 2022 13:58:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2E616C34115; Thu, 30 Jun 2022 13:58:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1656597538; bh=ZveGRgYhfySF35odUPkFs+MNL1f3DyIjjBE7MoE4TyY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=r+1n+DWBniNKsEpzWDzTdUKWl3EUJfZu7PvHRXt9V8WZDDGGsltKsc9x09zvhUb7O biqrgnBhO0oFce9EdmYiK3bjLqYwkdIAn1UaSFQFjcy5EFiJ7sQ4VzBlU9XnfK2N7V jA1flOAivVlcxxI1QQmOHMnDun2ooKzaPPZc56nLTFBPk3Ch1s9IxeRH+yUV/CxlQ+ 2+Co0Z54uTIIB7eAmQ/DwjJINyJFpcufqeBwINwsYHjD3PCECyJc6eZjTu2mpedJ/d xG9mub5UMR8D2FbwPKQJolcf3Kfs4SbBaDGJHUvNuHSo3wij8XPAC9GiWMyHMWofUy abYxdUbMFKDzA== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 16/24] KVM: arm64: Provide I-cache invalidation by VA at EL2 Date: Thu, 30 Jun 2022 14:57:39 +0100 Message-Id: <20220630135747.26983-17-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220630135747.26983-1-will@kernel.org> References: <20220630135747.26983-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220630_065901_629167_35B0036F X-CRM114-Status: GOOD ( 14.90 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for handling cache maintenance of guest pages at EL2, introduce an EL2 copy of icache_inval_pou() which will later be plumbed into the stage-2 page-table cache maintenance callbacks. Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_hyp.h | 1 + arch/arm64/kernel/image-vars.h | 3 --- arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/hyp/nvhe/cache.S | 11 +++++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 3 +++ 5 files changed, 16 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index aa7fa2a08f06..fd99cf09972d 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -123,4 +123,5 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val); +extern unsigned long kvm_nvhe_sym(__icache_flags); #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 241c86b67d01..4e3b6d618ac1 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -80,9 +80,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler); /* Vectors installed by hyp-init on reset HVC. */ KVM_NVHE_ALIAS(__hyp_stub_vectors); -/* Kernel symbol used by icache_is_vpipt(). 
*/ -KVM_NVHE_ALIAS(__icache_flags); - /* VMID bits set by the KVM VMID allocator */ KVM_NVHE_ALIAS(kvm_arm_vmid_bits); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a2343640c73c..90e0e7f38bb5 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1901,6 +1901,7 @@ static void kvm_hyp_init_symbols(void) kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1); + kvm_nvhe_sym(__icache_flags) = __icache_flags; } static int kvm_hyp_init_protection(u32 hyp_va_bits) diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S index 0c367eb5f4e2..85936c17ae40 100644 --- a/arch/arm64/kvm/hyp/nvhe/cache.S +++ b/arch/arm64/kvm/hyp/nvhe/cache.S @@ -12,3 +12,14 @@ SYM_FUNC_START(__pi_dcache_clean_inval_poc) ret SYM_FUNC_END(__pi_dcache_clean_inval_poc) SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc) + +SYM_FUNC_START(__pi_icache_inval_pou) +alternative_if ARM64_HAS_CACHE_DIC + isb + ret +alternative_else_nop_endif + + invalidate_icache_by_line x0, x1, x2, x3 + ret +SYM_FUNC_END(__pi_icache_inval_pou) +SYM_FUNC_ALIAS(icache_inval_pou, __pi_icache_inval_pou) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 77aeb787670b..114c5565de7d 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -12,6 +12,9 @@ #include #include +/* Used by icache_is_vpipt(). */ +unsigned long __icache_flags; + /* * Set trap register values based on features in ID_AA64PFR0. */ From patchwork Thu Jun 30 13:57:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901880 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D56CECCA47B for ; Thu, 30 Jun 2022 14:09:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+uCl2l4QYb44hesRzt1jBEpJuHHjSGE0MEsH997tybQ=; b=duA8prnvUVV4rb QhQP/h8LyM29X/eqcwPRrbPCdV0gohFRbL/Ou4+PQMQRhQ/xMl6+v75pJsczgVr7fkJqCEP98+wB0 lOQoKkkmvdGxH+W8nGZLHSaGo/d5OpOAmU0DDUnqje6QqkbR+Bki3hUPOJeWvunknkWAuCtYMjUy6 hjjHQ/M+3tfcMAhMf3ePd/pAZU7hA/A8plD4PmrIdmStKOIzV3msVfRSvGSFgaQc4Lfza0OcIMllV H+lW/5ddHFib9k9qebi/EanVrgYWzbQVmHmzxCSG45hLFqTUc8xhhN/0Zm2GVONuKEaRLozD/7hg1 hm4l5+R8OQ1A7UfJYAzQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6upu-00HWyR-9S; Thu, 30 Jun 2022 14:08:02 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uhE-00HStY-Sz for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 2022 13:59:07 +0000 Received: from 
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 17/24] KVM: arm64: Add generic hyp_memcache helpers
Date: Thu, 30 Jun 2022 14:57:40 +0100
Message-Id: <20220630135747.26983-18-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

The host and hypervisor will need to dynamically exchange memory pages
soon. Indeed, the hypervisor will rely on the host to donate memory
pages it can use to create guest stage-2 page-tables and to store
metadata. In order to ease this process, introduce a struct
hyp_memcache which is essentially a linked list of available pages,
indexed by physical addresses.
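To make the structure concrete, here is a minimal sketch of pushing and
popping a page at EL2, based on the helpers added below ('page' stands
for a hypothetical free, page-sized buffer; the host side would pass
its own virt/phys conversion callbacks instead):

	struct kvm_hyp_memcache mc = { 0 };

	/* The free page itself stores the PA of the next entry. */
	push_hyp_memcache(&mc, (phys_addr_t *)page, hyp_virt_to_phys);

	/* Entries come back out in LIFO order. */
	page = pop_hyp_memcache(&mc, hyp_phys_to_virt);

The list is intrusive: each free page holds the physical address of the
next one, so no additional metadata allocations are required and the
list remains walkable on either side of the EL1/EL2 boundary.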
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_host.h             | 57 +++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/mm.c                  | 33 +++++++++++
 arch/arm64/kvm/mmu.c                          | 26 +++++++++
 4 files changed, 118 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e91456f63161..70a2db91665d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -73,6 +73,63 @@ u32 __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 
+struct kvm_hyp_memcache {
+	phys_addr_t head;
+	unsigned long nr_pages;
+};
+
+static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
+				     phys_addr_t *p,
+				     phys_addr_t (*to_pa)(void *virt))
+{
+	*p = mc->head;
+	mc->head = to_pa(p);
+	mc->nr_pages++;
+}
+
+static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc,
+				     void *(*to_va)(phys_addr_t phys))
+{
+	phys_addr_t *p = to_va(mc->head);
+
+	if (!mc->nr_pages)
+		return NULL;
+
+	mc->head = *p;
+	mc->nr_pages--;
+
+	return p;
+}
+
+static inline int __topup_hyp_memcache(struct kvm_hyp_memcache *mc,
+				       unsigned long min_pages,
+				       void *(*alloc_fn)(void *arg),
+				       phys_addr_t (*to_pa)(void *virt),
+				       void *arg)
+{
+	while (mc->nr_pages < min_pages) {
+		phys_addr_t *p = alloc_fn(arg);
+
+		if (!p)
+			return -ENOMEM;
+		push_hyp_memcache(mc, p, to_pa);
+	}
+
+	return 0;
+}
+
+static inline void __free_hyp_memcache(struct kvm_hyp_memcache *mc,
+				       void (*free_fn)(void *virt, void *arg),
+				       void *(*to_va)(phys_addr_t phys),
+				       void *arg)
+{
+	while (mc->nr_pages)
+		free_fn(pop_hyp_memcache(mc, to_va), arg);
+}
+
+void free_hyp_memcache(struct kvm_hyp_memcache *mc);
+int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
+
 struct kvm_vmid {
 	atomic64_t id;
 };

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d11d9d68a680..36eea31a1c5f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -77,6 +77,8 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 int hyp_pin_shared_mem(void *from, void *to);
 void hyp_unpin_shared_mem(void *from, void *to);
 void reclaim_guest_pages(struct kvm_shadow_vm *vm);
+int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
+		    struct kvm_hyp_memcache *host_mc);
 
 static __always_inline void __load_host_stage2(void)
 {

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 17d689483ec4..74730376b992 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -308,3 +308,36 @@ int hyp_create_idmap(u32 hyp_va_bits)
 
 	return __pkvm_create_mappings(start, end - start, start, PAGE_HYP_EXEC);
 }
+
+static void *admit_host_page(void *arg)
+{
+	struct kvm_hyp_memcache *host_mc = arg;
+
+	if (!host_mc->nr_pages)
+		return NULL;
+
+	/*
+	 * The host still owns the pages in its memcache, so we need to go
+	 * through a full host-to-hyp donation cycle to change it. Fortunately,
+	 * __pkvm_host_donate_hyp() takes care of races for us, so if it
+	 * succeeds we're good to go.
+	 */
+	if (__pkvm_host_donate_hyp(hyp_phys_to_pfn(host_mc->head), 1))
+		return NULL;
+
+	return pop_hyp_memcache(host_mc, hyp_phys_to_virt);
+}
+
+/* Refill our local memcache by popping pages from the one provided by the host.
*/ +int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages, + struct kvm_hyp_memcache *host_mc) +{ + struct kvm_hyp_memcache tmp = *host_mc; + int ret; + + ret = __topup_hyp_memcache(mc, min_pages, admit_host_page, + hyp_virt_to_phys, &tmp); + *host_mc = tmp; + + return ret; +} diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index f5651a05b6a8..5ff0eaffc60f 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -772,6 +772,32 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) } } +static void hyp_mc_free_fn(void *addr, void *unused) +{ + free_page((unsigned long)addr); +} + +static void *hyp_mc_alloc_fn(void *unused) +{ + return (void *)__get_free_page(GFP_KERNEL_ACCOUNT); +} + +void free_hyp_memcache(struct kvm_hyp_memcache *mc) +{ + if (is_protected_kvm_enabled()) + __free_hyp_memcache(mc, hyp_mc_free_fn, + kvm_host_va, NULL); +} + +int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages) +{ + if (!is_protected_kvm_enabled()) + return 0; + + return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn, + kvm_host_pa, NULL); +} + /** * kvm_phys_addr_ioremap - map a device range to guest IPA * From patchwork Thu Jun 30 13:57:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901881 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B6F74C43334 for ; Thu, 30 Jun 2022 14:09:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=K5uOvb9aFuC57cB60lCjb4ED3abY2GDeQ+2L5dOQc40=; b=FLf/sqrxZcB3Mh jGgESlh/bh1Nvjg+WIXbeQLpFYATwcNRME0SYlbyv4XfnhtiiSFX/bncFWo2LcuEK7bCT895ju0x5 +d10vjAAD6X8EkthJw37OOvVO/Pxfd8H65nMQ7unpBGndnVrruKYsobN4S+MEM7KMzWAxgLrkE/gU wRplMuFBpkfs0nXhQSZ9/f97jIksVbqJSoba7CZyEk8MadNnemQ0yBltlmlKriy/lRICHaazXa7qM RyETgG/i4eZaHfZ69LvuI/TGjqUiv3fJ9aK9WE69/Te6zG8nETeP8G3Agw1eCnvfc0SIXsmY1Toc4 TD6ysKJtKEBKibLoWSLA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uqV-00HXGK-Ur; Thu, 30 Jun 2022 14:08:41 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uhI-00HSv6-OW for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 2022 13:59:11 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 4D22BB82AF4; Thu, 30 Jun 2022 13:59:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B3EAFC3411E; Thu, 30 Jun 2022 13:59:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1656597546; bh=n6PanqBSLoHwHHp3GucuaygE9tJD0LZxRXIiajqkM3g=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EyMFGkqmlJvt6Io/5PPJ/ja4cfg9PwH3AnYqfIunb1kiVqAK4WB4taOZsDIeBEE0a +HEYCc5JH2+0HAgIrYWbxTJrPu3/scLZbZU5ZAqnDgMvirMsP5HIJd5ZxARZ3+gKtS bvBzdswtcMoDWxpqBya7pUqKhisnPcXvD4AhI1A27XuvcoALTjpZP+qIqyldLC4Wtz R0Fp09DmdpLXWTgje4P9M1XSthDRU/u1jjaHI3J8hBkkZU/3chsSVbMsyx7SHRBcTE 9Hd23ry6WmnScCx9P9kLITqqwbbMy1TVWjnyBgfYLAj9+s20o0RnDjCLv3VcYHczat OSCFvNxtkZEHQ== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 18/24] KVM: arm64: Instantiate guest stage-2 page-tables at EL2 Date: Thu, 30 Jun 2022 14:57:41 +0100 Message-Id: <20220630135747.26983-19-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220630135747.26983-1-will@kernel.org> References: <20220630135747.26983-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220630_065909_153202_C95933CA X-CRM114-Status: GOOD ( 20.43 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Quentin Perret Extend the shadow initialisation at EL2 so that we instantiate a memory pool and a full 'struct kvm_s2_mmu' structure for each VM, with a stage-2 page-table entirely independent from the one managed by the host at EL1. For now, the new page-table is unused as there is no way for the host to map anything into it. Yet. Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 127 ++++++++++++++++++++++++- 2 files changed, 130 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 1d0a33f70879..c0e32a750b6e 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -9,6 +9,9 @@ #include +#include +#include + /* * Holds the relevant data for maintaining the vcpu state completely at hyp. */ @@ -37,6 +40,9 @@ struct kvm_shadow_vm { size_t shadow_area_size; struct kvm_pgtable pgt; + struct kvm_pgtable_mm_ops mm_ops; + struct hyp_pool pool; + hyp_spinlock_t lock; /* Array of the shadow state per vcpu. 
*/ struct kvm_shadow_vcpu_state shadow_vcpu_states[0]; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index a0af23de2640..5b22bba77e57 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -25,6 +25,21 @@ struct host_kvm host_kvm; static struct hyp_pool host_s2_pool; +static DEFINE_PER_CPU(struct kvm_shadow_vm *, __current_vm); +#define current_vm (*this_cpu_ptr(&__current_vm)) + +static void guest_lock_component(struct kvm_shadow_vm *vm) +{ + hyp_spin_lock(&vm->lock); + current_vm = vm; +} + +static void guest_unlock_component(struct kvm_shadow_vm *vm) +{ + current_vm = NULL; + hyp_spin_unlock(&vm->lock); +} + static void host_lock_component(void) { hyp_spin_lock(&host_kvm.lock); @@ -140,18 +155,124 @@ int kvm_host_prepare_stage2(void *pgt_pool_base) return 0; } +static bool guest_stage2_force_pte_cb(u64 addr, u64 end, + enum kvm_pgtable_prot prot) +{ + return true; +} + +static void *guest_s2_zalloc_pages_exact(size_t size) +{ + void *addr = hyp_alloc_pages(¤t_vm->pool, get_order(size)); + + WARN_ON(size != (PAGE_SIZE << get_order(size))); + hyp_split_page(hyp_virt_to_page(addr)); + + return addr; +} + +static void guest_s2_free_pages_exact(void *addr, unsigned long size) +{ + u8 order = get_order(size); + unsigned int i; + + for (i = 0; i < (1 << order); i++) + hyp_put_page(¤t_vm->pool, addr + (i * PAGE_SIZE)); +} + +static void *guest_s2_zalloc_page(void *mc) +{ + struct hyp_page *p; + void *addr; + + addr = hyp_alloc_pages(¤t_vm->pool, 0); + if (addr) + return addr; + + addr = pop_hyp_memcache(mc, hyp_phys_to_virt); + if (!addr) + return addr; + + memset(addr, 0, PAGE_SIZE); + p = hyp_virt_to_page(addr); + memset(p, 0, sizeof(*p)); + p->refcount = 1; + + return addr; +} + +static void guest_s2_get_page(void *addr) +{ + hyp_get_page(¤t_vm->pool, addr); +} + +static void guest_s2_put_page(void *addr) +{ + hyp_put_page(¤t_vm->pool, addr); +} + +static void clean_dcache_guest_page(void *va, size_t size) +{ + __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size); + hyp_fixmap_unmap(); +} + +static void invalidate_icache_guest_page(void *va, size_t size) +{ + __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size); + hyp_fixmap_unmap(); +} + int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) { - vm->pgt.pgd = pgd; + struct kvm_s2_mmu *mmu = &vm->kvm.arch.mmu; + unsigned long nr_pages; + int ret; + + nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; + ret = hyp_pool_init(&vm->pool, hyp_virt_to_pfn(pgd), nr_pages, 0); + if (ret) + return ret; + + hyp_spin_lock_init(&vm->lock); + vm->mm_ops = (struct kvm_pgtable_mm_ops) { + .zalloc_pages_exact = guest_s2_zalloc_pages_exact, + .free_pages_exact = guest_s2_free_pages_exact, + .zalloc_page = guest_s2_zalloc_page, + .phys_to_virt = hyp_phys_to_virt, + .virt_to_phys = hyp_virt_to_phys, + .page_count = hyp_page_count, + .get_page = guest_s2_get_page, + .put_page = guest_s2_put_page, + .dcache_clean_inval_poc = clean_dcache_guest_page, + .icache_inval_pou = invalidate_icache_guest_page, + }; + + guest_lock_component(vm); + ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0, + guest_stage2_force_pte_cb); + guest_unlock_component(vm); + if (ret) + return ret; + + vm->kvm.arch.mmu.pgd_phys = __hyp_pa(vm->pgt.pgd); + return 0; } void reclaim_guest_pages(struct kvm_shadow_vm *vm) { - unsigned long nr_pages; + unsigned long nr_pages, pfn; nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> 
PAGE_SHIFT; - WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(vm->pgt.pgd), nr_pages)); + pfn = hyp_virt_to_pfn(vm->pgt.pgd); + + guest_lock_component(vm); + kvm_pgtable_stage2_destroy(&vm->pgt); + vm->kvm.arch.mmu.pgd_phys = 0ULL; + guest_unlock_component(vm); + + WARN_ON(__pkvm_hyp_donate_host(pfn, nr_pages)); } int __pkvm_prot_finalize(void) From patchwork Thu Jun 30 13:57:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901882 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F11E7C43334 for ; Thu, 30 Jun 2022 14:10:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=jxfMF8OybEq9brg+x3PANH9p5xVXClND/QCdosVo/9U=; b=2SAulN9kC/Cfbj 6MLu59fL+2eMGk1IrkmhAevzzfKAErKn2MqtO6+mpjPQcQ/knNPflbx82n37ISxFSkvJ8Dbac+RMz 91r6soy4M6JP68ZOZIFkKDXhOf1m7lVHhHke32vUPYuRPmYbKlkCb7pPHfXU1fCy+Zlov9w8N/j3A 3fXh+aL+pqPOFMbhvORU8Cjk0KJwkL1BL2IpTeCYtIgs2i0OMCKWIuHR+4Zk85neI7F6tXbK5xzyr LacJKWlfAvpEH3nq0/uaeStQpi/C9XCWu+rVJNDB0ObKzhxfTCRRXPczaLTSMHw96WBAbCgX9m1eQ jZ/c+WVn7rvUhbVJSilQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6urS-00HXl4-D6; Thu, 30 Jun 2022 14:09:38 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uhM-00HSwX-EW for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 2022 13:59:19 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 08C2BB82A7D; Thu, 30 Jun 2022 13:59:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7D787C34115; Thu, 30 Jun 2022 13:59:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1656597549; bh=30dNE0wWVFK96CPhSt0kble4nLGBd3sqOAIzyYktuZM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fBQ1FC/rmwtl/nbtkcDgDbzXz/I0tdCnVXtgM8mVz/8KUxQu0Jof6biBGWYfLwsfJ /XjWJHdiYqHcA9IkP03K/BkbIYz5zLukiuyYdL3ZH+iCuVIidreGvURcSymuElUeSX g5hL3PKru9ueEu08z7f5PEBPyDBTukzlEgzgaRJu0Htp7Pje82e8sB5NwmWBcppP1A r+IkzMwR4EZJ1pkoM4JsSotXtgzUwxR+dHUyjJnJlnZ4fl3BHkg5DROU8IvXGEp8Br WBbmqSgPR9UJK1PFBUfWal91DACjASrGyG5aoeXi/t8MixGwDz9oBL4/gW5lR9c4Na e3FOUm4t4JmYA== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 19/24] KVM: arm64: Return guest 
memory from EL2 via dedicated teardown memcache Date: Thu, 30 Jun 2022 14:57:42 +0100 Message-Id: <20220630135747.26983-20-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20220630135747.26983-1-will@kernel.org> References: <20220630135747.26983-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220630_065912_842508_0994CAAC X-CRM114-Status: GOOD ( 17.49 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Quentin Perret Rather than relying on the host to free the shadow VM pages explicitly on teardown, introduce a dedicated teardown memcache which allows the host to reclaim guest memory resources without having to keep track of all of the allocations made by EL2. Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_host.h | 6 +----- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 +- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 17 +++++++++++------ arch/arm64/kvm/hyp/nvhe/pkvm.c | 8 +++++++- arch/arm64/kvm/pkvm.c | 12 +----------- 5 files changed, 21 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 70a2db91665d..09481268c224 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -175,11 +175,7 @@ struct kvm_smccc_features { struct kvm_protected_vm { unsigned int shadow_handle; struct mutex shadow_lock; - - struct { - void *pgd; - void *shadow; - } hyp_donations; + struct kvm_hyp_memcache teardown_mc; }; struct kvm_arch { diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 36eea31a1c5f..663019992b67 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -76,7 +76,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt); int hyp_pin_shared_mem(void *from, void *to); void hyp_unpin_shared_mem(void *from, void *to); -void reclaim_guest_pages(struct kvm_shadow_vm *vm); +void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc); int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages, struct kvm_hyp_memcache *host_mc); diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 5b22bba77e57..bcfdba1881c1 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -260,19 +260,24 @@ int kvm_guest_prepare_stage2(struct kvm_shadow_vm *vm, void *pgd) return 0; } -void reclaim_guest_pages(struct kvm_shadow_vm *vm) +void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc) { - unsigned long nr_pages, pfn; - - nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; - pfn = hyp_virt_to_pfn(vm->pgt.pgd); + void *addr; + /* Dump all pgtable pages in the hyp_pool */ guest_lock_component(vm); kvm_pgtable_stage2_destroy(&vm->pgt); vm->kvm.arch.mmu.pgd_phys = 0ULL; guest_unlock_component(vm); - WARN_ON(__pkvm_hyp_donate_host(pfn, nr_pages)); + /* Drain the hyp_pool into the memcache */ + addr = hyp_alloc_pages(&vm->pool, 0); + while (addr) { + memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page)); + push_hyp_memcache(mc, addr, hyp_virt_to_phys); + 
WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1)); + addr = hyp_alloc_pages(&vm->pool, 0); + } } int __pkvm_prot_finalize(void) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 114c5565de7d..a4a518b2a43b 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -546,8 +546,10 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva, int __pkvm_teardown_shadow(unsigned int shadow_handle) { + struct kvm_hyp_memcache *mc; struct kvm_shadow_vm *vm; size_t shadow_size; + void *addr; int err; /* Lookup then remove entry from the shadow table. */ @@ -569,7 +571,8 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle) hyp_spin_unlock(&shadow_lock); /* Reclaim guest pages (including page-table pages) */ - reclaim_guest_pages(vm); + mc = &vm->host_kvm->arch.pkvm.teardown_mc; + reclaim_guest_pages(vm, mc); unpin_host_vcpus(vm->shadow_vcpu_states, vm->kvm.created_vcpus); /* Push the metadata pages to the teardown memcache */ @@ -577,6 +580,9 @@ int __pkvm_teardown_shadow(unsigned int shadow_handle) hyp_unpin_shared_mem(vm->host_kvm, vm->host_kvm + 1); memset(vm, 0, shadow_size); + for (addr = vm; addr < (void *)vm + shadow_size; addr += PAGE_SIZE) + push_hyp_memcache(mc, addr, hyp_virt_to_phys); + unmap_donated_memory_noclear(vm, shadow_size); return 0; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index b4466b31d7c8..b174d6dfde36 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -160,8 +160,6 @@ static int __kvm_shadow_create(struct kvm *kvm) /* Store the shadow handle given by hyp for future call reference. */ kvm->arch.pkvm.shadow_handle = shadow_handle; - kvm->arch.pkvm.hyp_donations.pgd = pgd; - kvm->arch.pkvm.hyp_donations.shadow = shadow_addr; return 0; free_shadow: @@ -185,20 +183,12 @@ int kvm_shadow_create(struct kvm *kvm) void kvm_shadow_destroy(struct kvm *kvm) { - size_t pgd_sz, shadow_sz; - if (kvm->arch.pkvm.shadow_handle) WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_shadow, kvm->arch.pkvm.shadow_handle)); kvm->arch.pkvm.shadow_handle = 0; - - shadow_sz = PAGE_ALIGN(KVM_SHADOW_VM_SIZE + - KVM_SHADOW_VCPU_STATE_SIZE * kvm->created_vcpus); - pgd_sz = kvm_pgtable_stage2_pgd_size(kvm->arch.vtcr); - - free_pages_exact(kvm->arch.pkvm.hyp_donations.shadow, shadow_sz); - free_pages_exact(kvm->arch.pkvm.hyp_donations.pgd, pgd_sz); + free_hyp_memcache(&kvm->arch.pkvm.teardown_mc); } int kvm_init_pvm(struct kvm *kvm) From patchwork Thu Jun 30 13:57:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901884 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6CB10C433EF for ; Thu, 30 Jun 2022 14:12:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=4vKplB/BYC+2CteDMPb3Cu00j+vFh3zxldItcPYNKaU=; 
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 20/24] KVM: arm64: Unmap kvm_arm_hyp_percpu_base from the host
Date: Thu, 30 Jun 2022 14:57:43 +0100
Message-Id: <20220630135747.26983-21-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

In pKVM mode, we can't trust the host not to mess with the hypervisor
per-cpu offsets, so let's move the array containing them to the nVHE
code.
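The address computation itself is unchanged and, as before, follows the
per_cpu_ptr_nvhe_sym() macro (sketched here with 'sym' and 'cpu'
standing in for the macro arguments):

	unsigned long base, off;

	base = kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu];
	off = (unsigned long)&CHOOSE_NVHE_SYM(sym) -
	      (unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start);
	/* NULL if this CPU's hyp per-cpu region was never allocated. */
	ptr = base ? (typeof(CHOOSE_NVHE_SYM(sym)) *)(base + off) : NULL;

Only the array's home moves: it now lives in the nVHE object (marked
__ro_after_init there), so the host can no longer redirect EL2's
per-cpu accesses once its access to the hyp sections is revoked.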
Signed-off-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/include/asm/kvm_asm.h | 4 ++-- arch/arm64/kernel/image-vars.h | 3 --- arch/arm64/kvm/arm.c | 9 ++++----- arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++ 4 files changed, 8 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index fac4ed699913..d92f2ccae74c 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -108,7 +108,7 @@ enum __kvm_host_smccc_func { #define per_cpu_ptr_nvhe_sym(sym, cpu) \ ({ \ unsigned long base, off; \ - base = kvm_arm_hyp_percpu_base[cpu]; \ + base = kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]; \ off = (unsigned long)&CHOOSE_NVHE_SYM(sym) - \ (unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start); \ base ? (typeof(CHOOSE_NVHE_SYM(sym))*)(base + off) : NULL; \ @@ -197,7 +197,7 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector); #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init) #define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector) -extern unsigned long kvm_arm_hyp_percpu_base[NR_CPUS]; +extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[]; DECLARE_KVM_NVHE_SYM(__per_cpu_start); DECLARE_KVM_NVHE_SYM(__per_cpu_end); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 4e3b6d618ac1..37a2d833851a 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -102,9 +102,6 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities); KVM_NVHE_ALIAS(__start___kvm_ex_table); KVM_NVHE_ALIAS(__stop___kvm_ex_table); -/* Array containing bases of nVHE per-CPU memory regions. */ -KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base); - /* PMU available static key */ #ifdef CONFIG_HW_PERF_EVENTS KVM_NVHE_ALIAS(kvm_arm_pmu_available); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 90e0e7f38bb5..1934fcb2c2d3 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -51,7 +51,6 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized); DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector); static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); -unsigned long kvm_arm_hyp_percpu_base[NR_CPUS]; DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); static bool vgic_present; @@ -1865,13 +1864,13 @@ static void teardown_hyp_mode(void) free_hyp_pgds(); for_each_possible_cpu(cpu) { free_page(per_cpu(kvm_arm_hyp_stack_page, cpu)); - free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order()); + free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); } } static int do_pkvm_init(u32 hyp_va_bits) { - void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base); + void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)); int ret; preempt_disable(); @@ -1975,7 +1974,7 @@ static int init_hyp_mode(void) page_addr = page_address(page); memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size()); - kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr; + kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu] = (unsigned long)page_addr; } /* @@ -2068,7 +2067,7 @@ static int init_hyp_mode(void) } for_each_possible_cpu(cpu) { - char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu]; + char *percpu_begin = (char *)kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]; char *percpu_end = percpu_begin + nvhe_percpu_size(); /* Map Hyp percpu pages */ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c index 9f54833af400..04d194583f1e 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c @@ 
-23,6 +23,8 @@ u64 cpu_logical_map(unsigned int cpu) return hyp_cpu_logical_map[cpu]; } +unsigned long __ro_after_init kvm_arm_hyp_percpu_base[NR_CPUS]; + unsigned long __hyp_per_cpu_offset(unsigned int cpu) { unsigned long *cpu_base_array; From patchwork Thu Jun 30 13:57:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12901883 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3ED94C433EF for ; Thu, 30 Jun 2022 14:11:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3OpcqEBSUOjNMcbZDwhRio4YYX8YoZprrbjvfdai72Y=; b=uzfjaShghaXNtF I5rs68Paql8fRZcH9gEP5nzKvS6whhPA8Tt/XKJ/ImRk+KNGs/2iR8y61I5pN2X5Ute4ckvEY1tl5 XaUrRerzVzjUVTYcqgyRdHbhFktN7KfYRF7+eZfuKXsGTaxnw/eEcjYLjeAUPTq7R88W8RBRmvFMR Ft/mwQ4vC9b50jk20UnWAh/ecxmiwiScEfpLaN6SFyKIstah04QNyvcLr/OqYkCvzlrnAzKQXz/TB t21MNnHWZemireYcWqfXu4FoXM6cr19riW9sl2ISMbP90X4ztTQK7vswbSY2rt3kRTl8/cDvOMKKM qz2t79V4GgVU4WoM3NSQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6usG-00HYAr-VM; Thu, 30 Jun 2022 14:10:29 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1o6uhS-00HSzT-GE for linux-arm-kernel@lists.infradead.org; Thu, 30 Jun 2022 13:59:21 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id DD08361F5F; Thu, 30 Jun 2022 13:59:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0DF40C341CE; Thu, 30 Jun 2022 13:59:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1656597557; bh=hzobqoaj7h3GSxvMaHY728giC2LOivXExMKuupqC9ig=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Y47t9JBw8RLykJWdILmhP9lGaYyY3zojiXUcsCZJ5ocvfeiRuDmzVliGU5K1mi+uz RdE2KGH8eX1TZeJts53UT6qiPEns3us/X0w+bJ4GggulzREJtF+DbujsWV7t7lX259 IESGDT1knZXSHPhjRZGAu69LOIFIU1+2qDhV0RwT7yr6zHz9kxh2HH+5BDSzyLHYm0 qUY05bXc9cTm8fALh2PfVRK+pak4yu+4vr4k+LV9HnZKLYx8pRQLCwIVcEtbpnZDVm MRskc1JpGki1Ym9uHCR9hd7LT1lx+439rhKTzdS3R8cj2gHEj0vxzcV2qmjqnnLbYa CamYheOaDKg5A== From: Will Deacon To: kvmarm@lists.cs.columbia.edu Cc: Will Deacon , Ard Biesheuvel , Sean Christopherson , Alexandru Elisei , Andy Lutomirski , Catalin Marinas , James Morse , Chao Peng , Quentin Perret , Suzuki K Poulose , Michael Roth , Mark Rutland , Fuad Tabba , Oliver Upton , Marc Zyngier , kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 21/24] KVM: arm64: Maintain a copy of 'kvm_arm_vmid_bits' at EL2 Date: Thu, 30 Jun 2022 14:57:44 +0100 Message-Id: 
From patchwork Thu Jun 30 13:57:44 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901883
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 21/24] KVM: arm64: Maintain a copy of 'kvm_arm_vmid_bits' at EL2
Date: Thu, 30 Jun 2022 14:57:44 +0100
Message-Id: <20220630135747.26983-22-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

Sharing 'kvm_arm_vmid_bits' between EL1 and EL2 allows the host to modify
the variable arbitrarily, potentially leading to all sorts of shenanigans
as this is used to configure the VTTBR register for the guest stage-2.

In preparation for unmapping host sections entirely from EL2, maintain a
copy of 'kvm_arm_vmid_bits' at EL2 and initialise it from the host value
while it is still trusted.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_hyp.h | 2 ++
 arch/arm64/kernel/image-vars.h   | 3 ---
 arch/arm64/kvm/arm.c             | 1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c   | 3 +++
 4 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index fd99cf09972d..6797eafe7890 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -124,4 +124,6 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
 extern unsigned long kvm_nvhe_sym(__icache_flags);
 
+extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
+
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 37a2d833851a..3e2489d23ff0 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -80,9 +80,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
 /* Vectors installed by hyp-init on reset HVC. */
 KVM_NVHE_ALIAS(__hyp_stub_vectors);
 
-/* VMID bits set by the KVM VMID allocator */
-KVM_NVHE_ALIAS(kvm_arm_vmid_bits);
-
 /* Kernel symbols needed for cpus_have_final/const_caps checks. */
 KVM_NVHE_ALIAS(arm64_const_caps_ready);
 KVM_NVHE_ALIAS(cpu_hwcap_keys);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1934fcb2c2d3..fe249b584115 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1901,6 +1901,7 @@ static void kvm_hyp_init_symbols(void)
 	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
 	kvm_nvhe_sym(__icache_flags) = __icache_flags;
+	kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
 }
 
 static int kvm_hyp_init_protection(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index a4a518b2a43b..571334fd58ff 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -15,6 +15,9 @@
 /* Used by icache_is_vpipt(). */
 unsigned long __icache_flags;
 
+/* Used by kvm_get_vttbr(). */
+unsigned int kvm_arm_vmid_bits;
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
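For context on why this matters: 'kvm_arm_vmid_bits' bounds the VMID that
gets packed into VTTBR_EL2 alongside the stage-2 page-table base. The
snippet below is a simplified sketch of that composition, not the kernel's
kvm_get_vttbr(); the helper name is ours, and the CnP and baddr details
are omitted.

#include <linux/bits.h>
#include <linux/types.h>

#define VTTBR_VMID_SHIFT	48	/* VMID field sits in VTTBR_EL2[63:48] */

/*
 * Sketch: if EL2 trusted a host-writable vmid_bits, the host could
 * influence which VMID tags the guest's stage-2 translations.
 */
static u64 make_vttbr(u64 baddr, u64 vmid, unsigned int vmid_bits)
{
	u64 vmid_mask = GENMASK_ULL(vmid_bits - 1, 0);

	return baddr | ((vmid & vmid_mask) << VTTBR_VMID_SHIFT);
}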
From patchwork Thu Jun 30 13:57:45 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901888
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 22/24] KVM: arm64: Explicitly map kvm_vgic_global_state at EL2
Date: Thu, 30 Jun 2022 14:57:45 +0100
Message-Id: <20220630135747.26983-23-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>
From: Quentin Perret

The pkvm hypervisor may need to read the kvm_vgic_global_state variable
at EL2. Make sure to explicitly map it in its stage-1 page-table rather
than relying on mapping all of the host .rodata section.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 3f689ffb2693..fa06828899e1 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -161,6 +161,11 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
+	ret = pkvm_create_mappings(&kvm_vgic_global_state,
+				   &kvm_vgic_global_state + 1, prot);
+	if (ret)
+		return ret;
+
 	return 0;
 }
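Note the '&kvm_vgic_global_state + 1' idiom above: it passes the half-open
byte range covered by the variable, and pkvm_create_mappings() rounds it
out to page granularity internally. Written out explicitly, the equivalent
looks like the sketch below; the wrapper name is ours and is not part of
the patch.

/*
 * Sketch: map the page range spanned by a single object at EL2,
 * equivalent in spirit to pkvm_create_mappings(obj, obj + 1, prot).
 */
static int pkvm_map_object(void *obj, size_t size, enum kvm_pgtable_prot prot)
{
	unsigned long start = ALIGN_DOWN((unsigned long)obj, PAGE_SIZE);
	unsigned long end = ALIGN((unsigned long)obj + size, PAGE_SIZE);

	return pkvm_create_mappings((void *)start, (void *)end, prot);
}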
From patchwork Thu Jun 30 13:57:46 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901889
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 23/24] KVM: arm64: Don't map host sections in pkvm
Date: Thu, 30 Jun 2022 14:57:46 +0100
Message-Id: <20220630135747.26983-24-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

We no longer need to map the host's .rodata and .bss sections in the
pkvm hypervisor, so let's remove those mappings. This will avoid
creating dependencies at EL2 on host-controlled data structures.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kernel/image-vars.h  |  6 ------
 arch/arm64/kvm/hyp/nvhe/setup.c | 14 +++-----------
 2 files changed, 3 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 3e2489d23ff0..2d4d6836ff47 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -115,12 +115,6 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
 KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
 #endif
 
-/* Kernel memory sections */
-KVM_NVHE_ALIAS(__start_rodata);
-KVM_NVHE_ALIAS(__end_rodata);
-KVM_NVHE_ALIAS(__bss_start);
-KVM_NVHE_ALIAS(__bss_stop);
-
 /* Hyp memory sections */
 KVM_NVHE_ALIAS(__hyp_idmap_text_start);
 KVM_NVHE_ALIAS(__hyp_idmap_text_end);
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index fa06828899e1..1a2e06760c41 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -144,23 +144,15 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	}
 
 	/*
-	 * Map the host's .bss and .rodata sections RO in the hypervisor, but
-	 * transfer the ownership from the host to the hypervisor itself to
-	 * make sure it can't be donated or shared with another entity.
+	 * Map the host sections RO in the hypervisor, but transfer the
+	 * ownership from the host to the hypervisor itself to make sure they
+	 * can't be donated or shared with another entity.
 	 *
 	 * The ownership transition requires matching changes in the host
 	 * stage-2. This will be done later (see finalize_host_mappings()) once
 	 * the hyp_vmemmap is addressable.
 	 */
 	prot = pkvm_mkstate(PAGE_HYP_RO, PKVM_PAGE_SHARED_OWNED);
-	ret = pkvm_create_mappings(__start_rodata, __end_rodata, prot);
-	if (ret)
-		return ret;
-
-	ret = pkvm_create_mappings(__hyp_bss_end, __bss_stop, prot);
-	if (ret)
-		return ret;
-
 	ret = pkvm_create_mappings(&kvm_vgic_global_state,
 				   &kvm_vgic_global_state + 1, prot);
 	if (ret)
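The pkvm_mkstate() call retained above is what implements the ownership
annotation described in the comment: a software-defined page state is
folded into otherwise-unused bits of the mapping's prot. The sketch below
shows the concept only; the real state encoding and bit positions used by
pkvm differ, and all names here are illustrative.

#include <linux/bits.h>
#include <linux/types.h>

/* Sketch: page state carried in the pte's software bits. */
#define PROT_SW0		BIT(55)	/* illustrative positions only */
#define PROT_SW1		BIT(56)
#define PROT_STATE_MASK		(PROT_SW0 | PROT_SW1)

#define STATE_OWNED		0
#define STATE_SHARED_OWNED	PROT_SW0
#define STATE_SHARED_BORROWED	PROT_SW1

static u64 mkstate(u64 prot, u64 state)
{
	/* Replace any previous state annotation with the new one. */
	return (prot & ~PROT_STATE_MASK) | state;
}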
From patchwork Thu Jun 30 13:57:47 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901890
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH v2 24/24] KVM: arm64: Use the shadow vCPU structure in
 handle___kvm_vcpu_run()
Date: Thu, 30 Jun 2022 14:57:47 +0100
Message-Id: <20220630135747.26983-25-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

As a stepping stone towards deprivileging the host's access to the
guest's vCPU structures, introduce some naive flush/sync routines to
copy most of the host vCPU into the shadow vCPU on vCPU run and back
again on return to EL1.

This allows us to run using the shadow structure when KVM is
initialised in protected mode.

Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  4 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 84 +++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 28 +++++++++
 3 files changed, 114 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index c0e32a750b6e..0edb3faa4067 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -63,4 +63,8 @@ int __pkvm_init_shadow(struct kvm *kvm, unsigned long shadow_hva,
 		       size_t shadow_size, unsigned long pgd_hva);
 int __pkvm_teardown_shadow(unsigned int shadow_handle);
 
+struct kvm_shadow_vcpu_state *
+pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx);
+void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index a1fbd11c8041..39d66c7b0560 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -22,11 +22,91 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
+static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
+	struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu;
+
+	shadow_vcpu->arch.ctxt = host_vcpu->arch.ctxt;
+
+	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
+	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+
+	shadow_vcpu->arch.hw_mmu = host_vcpu->arch.hw_mmu;
+
+	shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2;
+	shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
+	shadow_vcpu->arch.cptr_el2 = host_vcpu->arch.cptr_el2;
+
+	shadow_vcpu->arch.iflags = host_vcpu->arch.iflags;
+	shadow_vcpu->arch.fp_state = host_vcpu->arch.fp_state;
+
+	shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
+	shadow_vcpu->arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state;
+
+	shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
+
+	shadow_vcpu->arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3;
+}
+
+static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
+	struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu;
+	struct vgic_v3_cpu_if *shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3;
+	struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
+	unsigned int i;
+
+	host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt;
+
+	host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2;
+	host_vcpu->arch.cptr_el2 = shadow_vcpu->arch.cptr_el2;
+
+	host_vcpu->arch.fault = shadow_vcpu->arch.fault;
+
+	host_vcpu->arch.iflags = shadow_vcpu->arch.iflags;
+	host_vcpu->arch.fp_state = shadow_vcpu->arch.fp_state;
+
+	host_cpu_if->vgic_hcr = shadow_cpu_if->vgic_hcr;
+	for (i = 0; i < shadow_cpu_if->used_lrs; ++i)
+		host_cpu_if->vgic_lr[i] = shadow_cpu_if->vgic_lr[i];
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
-	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
+	int ret;
+
+	host_vcpu = kern_hyp_va(host_vcpu);
+
+	if (unlikely(is_protected_kvm_enabled())) {
+		struct kvm_shadow_vcpu_state *shadow_state;
+		struct kvm_vcpu *shadow_vcpu;
+		struct kvm *host_kvm;
+		unsigned int handle;
+
+		host_kvm = kern_hyp_va(host_vcpu->kvm);
+		handle = host_kvm->arch.pkvm.shadow_handle;
+		shadow_state = pkvm_load_shadow_vcpu_state(handle,
+							   host_vcpu->vcpu_idx);
+		if (!shadow_state) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		shadow_vcpu = &shadow_state->shadow_vcpu;
+		flush_shadow_state(shadow_state);
+
+		ret = __kvm_vcpu_run(shadow_vcpu);
+
+		sync_shadow_state(shadow_state);
+		pkvm_put_shadow_vcpu_state(shadow_state);
+	} else {
+		ret = __kvm_vcpu_run(host_vcpu);
+	}
 
-	cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
+out:
+	cpu_reg(host_ctxt, 1) = ret;
 }
 
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 571334fd58ff..bf92f4443c92 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -247,6 +247,33 @@ static struct kvm_shadow_vm *find_shadow_by_handle(unsigned int shadow_handle)
 	return shadow_table[shadow_idx];
 }
 
+struct kvm_shadow_vcpu_state *
+pkvm_load_shadow_vcpu_state(unsigned int shadow_handle, unsigned int vcpu_idx)
+{
+	struct kvm_shadow_vcpu_state *shadow_state = NULL;
+	struct kvm_shadow_vm *vm;
+
+	hyp_spin_lock(&shadow_lock);
+	vm = find_shadow_by_handle(shadow_handle);
+	if (!vm || vm->kvm.created_vcpus <= vcpu_idx)
+		goto unlock;
+
+	shadow_state = &vm->shadow_vcpu_states[vcpu_idx];
+	hyp_page_ref_inc(hyp_virt_to_page(vm));
+unlock:
+	hyp_spin_unlock(&shadow_lock);
+	return shadow_state;
+}
+
+void pkvm_put_shadow_vcpu_state(struct kvm_shadow_vcpu_state *shadow_state)
+{
+	struct kvm_shadow_vm *vm = shadow_state->shadow_vm;
+
+	hyp_spin_lock(&shadow_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(vm));
+	hyp_spin_unlock(&shadow_lock);
+}
+
 static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 			     unsigned int nr_vcpus)
 {
@@ -304,6 +331,7 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		shadow_vcpu->vcpu_idx = i;
 
 		shadow_vcpu->arch.hw_mmu = &vm->kvm.arch.mmu;
+		shadow_vcpu->arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	}
 
 	return 0;
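The load/put pair added to pkvm.c gives EL2 a refcounted handle on a
shadow vCPU, pinning the shadow VM's backing pages for the duration of
the access. A hedged sketch of the intended usage discipline follows;
the wrapper below is ours, not part of the patch.

/*
 * Sketch: bracket any EL2 access to a shadow vCPU with load/put so the
 * shadow VM cannot be torn down underneath it. handle___kvm_vcpu_run()
 * above follows this shape, with flush_shadow_state(), __kvm_vcpu_run()
 * and sync_shadow_state() in place of fn().
 */
static int with_shadow_vcpu(unsigned int handle, unsigned int vcpu_idx,
			    int (*fn)(struct kvm_vcpu *))
{
	struct kvm_shadow_vcpu_state *shadow_state;
	int ret;

	shadow_state = pkvm_load_shadow_vcpu_state(handle, vcpu_idx);
	if (!shadow_state)
		return -EINVAL;	/* stale handle or out-of-range vcpu_idx */

	ret = fn(&shadow_state->shadow_vcpu);

	pkvm_put_shadow_vcpu_state(shadow_state);
	return ret;
}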