From patchwork Wed Dec 18 19:40:42 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914087
Date: Wed, 18 Dec 2024 19:40:42 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-2-qperret@google.com>
Subject: [PATCH v4 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

The 'concrete' (a.k.a. non-meta) page states are currently encoded using
software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states, as that
makes conversions from and to PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..5462faf6bfee 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,27 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
 
 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~__PKVM_PAGE_RESERVED)
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
 						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }
 
 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }
 
 struct host_mmu {

From patchwork Wed Dec 18 19:40:43 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914095
Date: Wed, 18 Dec 2024 19:40:43 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-3-qperret@google.com>
Subject: [PATCH v4 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 34 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 33 ++++++++++++++++++
 2 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 5462faf6bfee..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,42 +11,10 @@
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_pgtable.h>
 #include <asm/virt.h>
+#include <nvhe/memory.h>
 #include <nvhe/pkvm.h>
 #include <nvhe/spinlock.h>
 
-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~__PKVM_PAGE_RESERVED)
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..0964c461da92 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,39 @@
 
 #include <linux/types.h>
 
+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~__PKVM_PAGE_RESERVED)
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;

From patchwork Wed Dec 18 19:40:44 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914096
Date: Wed, 18 Dec 2024 19:40:44 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-4-qperret@google.com>
Subject: [PATCH v4 03/18] KVM: arm64: Make hyp_page::order a u8
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

We don't need 16 bits to store the hyp page order, and we'll need some
bits to store page ownership data soon, so let's reduce the order
member.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..3766333bace9 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	((u8)(~0))
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 0964c461da92..8f2b42bcc8e1 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -41,8 +41,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }
 
 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;
 
@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 
 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;
 
 	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;
 
 	hyp_spin_lock(&pool->lock);

From patchwork Wed Dec 18 19:40:45 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914097
Date: Wed, 18 Dec 2024 19:40:45 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-5-qperret@google.com>
Subject: [PATCH v4 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as this forces us to break block mappings purely to
support this software tracking. This causes an unnecessarily fragmented
stage-2 page-table for the host, in particular when it shares pages with
Secure, which can lead to measurable regressions. Moreover, having this
state stored in the page-table forces us to do multiple costly walks on
the page transition path, hence causing overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.
Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/memory.h | 14 +++- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 100 ++++++++++++++++------- arch/arm64/kvm/hyp/nvhe/setup.c | 7 +- 3 files changed, 84 insertions(+), 37 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index 8f2b42bcc8e1..2a5eabf4b753 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -8,7 +8,7 @@ #include /* - * SW bits 0-1 are reserved to track the memory ownership state of each page: + * Bits 0-1 are reserved to track the memory ownership state of each page: * 00: The page is owned exclusively by the page-table owner. * 01: The page is owned by the page-table owner, but is shared * with another entity. @@ -43,7 +43,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot) struct hyp_page { u16 refcount; u8 order; - u8 reserved; + + /* Host (non-meta) state. Guarded by the host stage-2 lock. 
*/ + enum pkvm_page_state host_state : 8; }; extern u64 __hyp_vmemmap; @@ -63,7 +65,13 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr) #define hyp_phys_to_pfn(phys) ((phys) >> PAGE_SHIFT) #define hyp_pfn_to_phys(pfn) ((phys_addr_t)((pfn) << PAGE_SHIFT)) -#define hyp_phys_to_page(phys) (&hyp_vmemmap[hyp_phys_to_pfn(phys)]) + +static inline struct hyp_page *hyp_phys_to_page(phys_addr_t phys) +{ + BUILD_BUG_ON(sizeof(struct hyp_page) != sizeof(u32)); + return &hyp_vmemmap[hyp_phys_to_pfn(phys)]; +} + #define hyp_virt_to_page(virt) hyp_phys_to_page(__hyp_pa(virt)) #define hyp_virt_to_pfn(virt) hyp_phys_to_pfn(__hyp_pa(virt)) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index caba3e4bd09e..12bb5445fe47 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc) memset(addr, 0, PAGE_SIZE); p = hyp_virt_to_page(addr); - memset(p, 0, sizeof(*p)); p->refcount = 1; + p->order = 0; return addr; } @@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd) void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc) { + struct hyp_page *page; void *addr; /* Dump all pgtable pages in the hyp_pool */ @@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc) /* Drain the hyp_pool into the memcache */ addr = hyp_alloc_pages(&vm->pool, 0); while (addr) { - memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page)); + page = hyp_virt_to_page(addr); + page->refcount = 0; + page->order = 0; push_hyp_memcache(mc, addr, hyp_virt_to_phys); WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1)); addr = hyp_alloc_pages(&vm->pool, 0); @@ -382,19 +385,28 @@ bool addr_is_memory(phys_addr_t phys) return !!find_mem_range(phys, &range); } -static bool addr_is_allowed_memory(phys_addr_t phys) +static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range) +{ 
+	return range->start <= addr && addr < range->end;
+}
+
+static int check_range_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;
 
-	reg = find_mem_range(phys, &range);
+	/*
+	 * Callers can't check the state of a range that overlaps memory and
+	 * MMIO regions, so ensure [start, end[ is in the same kvm_mem_range.
+	 */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1, &range))
+		return -EINVAL;
 
-	return reg && !(reg->flags & MEMBLOCK_NOMAP);
-}
+	if (!reg || reg->flags & MEMBLOCK_NOMAP)
+		return -EPERM;
 
-static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
-{
-	return range->start <= addr && addr < range->end;
+	return 0;
 }
 
 static bool range_is_memory(u64 start, u64 end)
@@ -454,8 +466,10 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 	if (kvm_pte_valid(pte))
 		return -EAGAIN;
 
-	if (pte)
+	if (pte) {
+		WARN_ON(addr_is_memory(addr) && hyp_phys_to_page(addr)->host_state != PKVM_NOPAGE);
 		return -EPERM;
+	}
 
 	do {
 		u64 granule = kvm_granule_size(level);
@@ -477,10 +491,33 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }
 
+static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
+{
+	phys_addr_t end = addr + size;
+
+	for (; addr < end; addr += PAGE_SIZE)
+		hyp_phys_to_page(addr)->host_state = state;
+}
+
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
-			       addr, size, &host_s2_pool, owner_id);
+	int ret;
+
+	if (!addr_is_memory(addr))
+		return -EPERM;
+
+	ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+			      addr, size, &host_s2_pool, owner_id);
+	if (ret)
+		return ret;
+
+	/* Don't forget to update the vmemmap tracking for the host */
+	if (owner_id == PKVM_ID_HOST)
+		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
+	else
+		__host_update_page_state(addr, size, PKVM_NOPAGE);
+
+	return 0;
 }
 
 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
@@ -604,35 +641,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
-{
-	if (!addr_is_allowed_memory(addr))
-		return PKVM_NOPAGE;
-
-	if (!kvm_pte_valid(pte) && pte)
-		return PKVM_NOPAGE;
-
-	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
-}
-
 static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state)
 {
-	struct check_walk_data d = {
-		.desired	= state,
-		.get_page_state	= host_get_page_state,
-	};
+	u64 end = addr + size;
+	int ret;
+
+	ret = check_range_allowed_memory(addr, end);
+	if (ret)
+		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	return check_page_state_range(&host_mmu.pgt, addr, size, &d);
+	for (; addr < end; addr += PAGE_SIZE) {
+		if (hyp_phys_to_page(addr)->host_state != state)
+			return -EPERM;
+	}
+
+	return 0;
 }
 
 static int __host_set_page_state_range(u64 addr, u64 size, enum pkvm_page_state state)
 {
-	enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
+	if (hyp_phys_to_page(addr)->host_state == PKVM_NOPAGE) {
+		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
-	return host_stage2_idmap_locked(addr, size, prot);
+		if (ret)
+			return ret;
+	}
+
+	__host_update_page_state(addr, size, state);
+
+	return 0;
 }
 
 static int host_request_owned_transition(u64 *completer_addr,
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..7e04d1c2a03d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -180,7 +180,6 @@ static void hpool_put_page(void *addr)
 static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 				     enum kvm_pgtable_walk_flags visit)
 {
-	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	phys_addr_t phys;
 
@@ -203,16 +202,16 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	case PKVM_PAGE_OWNED:
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_BORROWED;
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_OWNED;
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
+	return 0;
 }
 
 static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx,

From patchwork Wed Dec 18 19:40:46 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914098
Date: Wed, 18 Dec 2024 19:40:46 +0000
Message-ID: <20241218194059.3670226-6-qperret@google.com>
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Subject: [PATCH v4 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

kvm_pgtable_stage2_mkyoung currently assumes that it is being
called from a 'shared' walker, which will not be true once called
from pKVM. To allow for the re-use of that function, make the walk
flags one of its parameters.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..38b7ec1c8614 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,13 +669,15 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
  */
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..0470aedb4bf4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,14 +1245,13 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 				NULL, NULL, 0);
 }
 
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       NULL, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       NULL, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..a2339b76c826 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1718,13 +1718,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 /* Resolve the access fault by making the page young again.
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }

From patchwork Wed Dec 18 19:40:47 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914099
Date: Wed, 18 Dec 2024 19:40:47 +0000
Message-ID: <20241218194059.3670226-7-qperret@google.com>
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Subject: [PATCH v4 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

kvm_pgtable_stage2_relax_perms currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM. To
allow for the re-use of that function, make the walk flags one of its
parameters.
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 38b7ec1c8614..c2f4149283ef 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -707,6 +707,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -719,7 +720,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0470aedb4bf4..b7a3b5363235 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1307,7 +1307,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1325,9 +1325,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a2339b76c826..641e4fec1659 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1452,6 +1452,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1695,13 +1696,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * PTE, which will be preserved.
 	 */
 	prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
 	}
 
 out_unlock:

From patchwork Wed Dec 18 19:40:48 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914105
Date: Wed, 18 Dec 2024 19:40:48 +0000
Message-ID: <20241218194059.3670226-8-qperret@google.com>
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Subject: [PATCH v4 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Turn kvm_pgtable_stage2_init() into a static inline function instead
of a macro. This will allow the usage of typeof() on it later on.
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c2f4149283ef..04418b5e3004 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			      enum kvm_pgtable_stage2_flags flags,
 			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.

From patchwork Wed Dec 18 19:40:49 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13914106
Date: Wed, 18 Dec 2024 19:40:49 +0000
Message-ID: <20241218194059.3670226-9-qperret@google.com>
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Subject: [PATCH v4 08/18] KVM: arm64: Add {get,put}_pkvm_hyp_vm() helpers
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for accessing pkvm_hyp_vm structures at EL2 in a context where we can't always expect a vCPU to be loaded (e.g. MMU notifiers), introduce get/put helpers to get temporary references to hyp VMs from any context. Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 3 +++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 20 ++++++++++++++++++++ 2 files changed, 23 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 24a9a8330d19..f361d8b91930 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -70,4 +70,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle, unsigned int vcpu_idx); void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); +struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle); +void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm); + #endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 071993c16de8..d46a02e24e4a 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -327,6 +327,26 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) hyp_spin_unlock(&vm_table_lock); } +struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle) +{ + struct pkvm_hyp_vm *hyp_vm; + + hyp_spin_lock(&vm_table_lock); + hyp_vm = get_vm_by_handle(handle); + if (hyp_vm) + hyp_page_ref_inc(hyp_virt_to_page(hyp_vm)); + hyp_spin_unlock(&vm_table_lock); + + return hyp_vm; +} + +void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm) +{ + hyp_spin_lock(&vm_table_lock); + hyp_page_ref_dec(hyp_virt_to_page(hyp_vm)); + hyp_spin_unlock(&vm_table_lock); +} + static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm) { struct kvm *kvm = &hyp_vm->kvm; From patchwork Wed Dec 18 19:40:50 2024 
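The get/put helpers from the previous patch pair a handle lookup with a reference-count bump under one lock, so a VM found in the table cannot be torn down while a caller holds it. As a sanity check, that scheme can be modeled in plain userspace C. Everything below (`model_vm`, `model_get_vm`, the `in_use` flag standing in for `get_vm_by_handle()`, the plain counter standing in for the `hyp_page` refcount) is hypothetical scaffolding, not kernel code:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_VMS 16

/* Hypothetical stand-in for struct pkvm_hyp_vm and its backing hyp_page. */
struct model_vm {
	int in_use;   /* slot allocated, i.e. get_vm_by_handle() would find it */
	int refcount; /* stands in for the hyp_page refcount of the VM's page */
};

static struct model_vm vm_table[MAX_VMS];

/* Mirrors get_pkvm_hyp_vm(): look up the handle and take a reference.
 * In the real code both steps happen under vm_table_lock, which is what
 * makes the lookup and the ref-inc atomic with respect to teardown. */
struct model_vm *model_get_vm(int handle)
{
	struct model_vm *vm = NULL;

	/* vm_table_lock would be taken here */
	if (handle >= 0 && handle < MAX_VMS && vm_table[handle].in_use) {
		vm = &vm_table[handle];
		vm->refcount++;
	}
	/* vm_table_lock would be released here */

	return vm;
}

/* Mirrors put_pkvm_hyp_vm(): drop the reference under the same lock. */
void model_put_vm(struct model_vm *vm)
{
	/* vm_table_lock would be taken here */
	vm->refcount--;
	/* vm_table_lock would be released here */
}
```

The design point is that references are per-VM rather than per-vCPU, which is what later lets MMU-notifier-style paths at EL2 pin a VM without any vCPU loaded.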
Date: Wed, 18 Dec 2024 19:40:50 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-10-qperret@google.com>
Subject: [PATCH v4 09/18] KVM: arm64: Introduce __pkvm_vcpu_{load,put}()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

From: Marc Zyngier

Rather than looking up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is updated
by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Marc Zyngier
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h       |  2 ++
 arch/arm64/kvm/arm.c                   | 14 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  7 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 47 ++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 29 ++++++++++++++++
 arch/arm64/kvm/vgic/vgic-v3.c          |  6 ++--
 6 files changed, 93 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ca2590344313..89c0fac69551 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,8 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..55cc62b2f469 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -619,12 +619,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
 
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+				  vcpu->kvm->arch.pkvm.handle,
+				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+	}
+
 	if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
 		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
+	}
+
 	kvm_arch_vcpu_put_debug_state_flags(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	if (has_vhe())
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index f361d8b91930..be52c5b15e21 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu {
 
 	/* Backpointer to the host's (untrusted) vCPU instance. */
 	struct kvm_vcpu *host_vcpu;
+
+	/*
+	 * If this hyp vCPU is loaded, then this is a backpointer to the
+	 * per-cpu pointer tracking us. Otherwise, NULL if not loaded.
+	 */
+	struct pkvm_hyp_vcpu **loaded_hyp_vcpu;
 };
 
 /*
@@ -69,6 +75,7 @@ int __pkvm_teardown_vm(pkvm_handle_t handle);
 struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6aa0b13d86e5..95d78db315b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -141,16 +141,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 		host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
 }
 
+static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2);
+	DECLARE_REG(u64, hcr_el2, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
+	if (!hyp_vcpu)
+		return;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) {
+		/* Propagate WFx trapping flags */
+		hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
+		hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+	}
+}
+
+static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (hyp_vcpu)
+		pkvm_put_hyp_vcpu(hyp_vcpu);
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
 	int ret;
 
-	host_vcpu = kern_hyp_va(host_vcpu);
-
 	if (unlikely(is_protected_kvm_enabled())) {
-		struct pkvm_hyp_vcpu *hyp_vcpu;
-		struct kvm *host_kvm;
+		struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
 
 		/*
 		 * KVM (and pKVM) doesn't support SME guests for now, and
@@ -163,9 +193,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 			goto out;
 		}
 
-		host_kvm = kern_hyp_va(host_vcpu->kvm);
-		hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
-					      host_vcpu->vcpu_idx);
 		if (!hyp_vcpu) {
 			ret = -EINVAL;
 			goto out;
@@ -176,12 +203,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 		ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
 
 		sync_hyp_vcpu(hyp_vcpu);
-		pkvm_put_hyp_vcpu(hyp_vcpu);
 	} else {
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(host_vcpu);
+		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
 	}
-
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
@@ -409,6 +434,8 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init_vm),
 	HANDLE_FUNC(__pkvm_init_vcpu),
 	HANDLE_FUNC(__pkvm_teardown_vm),
+	HANDLE_FUNC(__pkvm_vcpu_load),
+	HANDLE_FUNC(__pkvm_vcpu_put),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d46a02e24e4a..496d186efb03 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,6 +23,12 @@
 unsigned int kvm_arm_vmid_bits;
 unsigned int kvm_host_sve_max_vl;
 
+/*
+ * The currently loaded hyp vCPU for each physical CPU. Used only when
+ * protected KVM is enabled, but for both protected and non-protected VMs.
+ */
+static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
@@ -306,15 +312,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
 	struct pkvm_hyp_vm *hyp_vm;
 
+	/* Cannot load a new vcpu without putting the old one first. */
+	if (__this_cpu_read(loaded_hyp_vcpu))
+		return NULL;
+
 	hyp_spin_lock(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
 		goto unlock;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+
+	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
+		hyp_vcpu = NULL;
+		goto unlock;
+	}
+
+	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
+
+	if (hyp_vcpu)
+		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
+
 	return hyp_vcpu;
 }
 
@@ -323,10 +344,18 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
 	hyp_spin_lock(&vm_table_lock);
+	hyp_vcpu->loaded_hyp_vcpu = NULL;
+	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
+{
+	return __this_cpu_read(loaded_hyp_vcpu);
+}
+
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
 {
 	struct pkvm_hyp_vm *hyp_vm;
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index f267bc2486a1..c2ef41fff079 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
 
 	WARN_ON(vgic_v4_put(vcpu));
 
 	if (has_vhe())

From patchwork Wed Dec 18 19:40:51 2024
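The load/put protocol from the previous patch enforces two invariants: a CPU may hold at most one loaded hyp vCPU, and a vCPU may be loaded on at most one CPU (via the `loaded_hyp_vcpu` backpointer). A small userspace sketch of just that bookkeeping, with hypothetical names (`model_vcpu`, `model_vcpu_load`) and an array standing in for the per-CPU variable:

```c
#include <assert.h>
#include <stddef.h>

#define NR_MODEL_CPUS 4

/* Stand-in for struct pkvm_hyp_vcpu; only the backpointer matters here. */
struct model_vcpu {
	struct model_vcpu **loaded_on; /* backpointer, NULL when not loaded */
};

/* Stand-in for DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu). */
static struct model_vcpu *loaded_vcpu[NR_MODEL_CPUS];

/* Mirrors pkvm_load_hyp_vcpu(): refuse a double-load on either side.
 * In the real code the vCPU-side check runs under vm_table_lock. */
int model_vcpu_load(int cpu, struct model_vcpu *vcpu)
{
	if (loaded_vcpu[cpu])	/* this CPU already has a vCPU loaded */
		return -1;
	if (vcpu->loaded_on)	/* vCPU is already loaded on some CPU */
		return -1;

	vcpu->loaded_on = &loaded_vcpu[cpu];
	loaded_vcpu[cpu] = vcpu;
	return 0;
}

/* Mirrors pkvm_put_hyp_vcpu(): unlink both sides. */
void model_vcpu_put(int cpu)
{
	struct model_vcpu *vcpu = loaded_vcpu[cpu];

	if (!vcpu)
		return;
	vcpu->loaded_on = NULL;
	loaded_vcpu[cpu] = NULL;
}
```

Because the run hypercall can then fetch the current vCPU from the per-CPU slot, the per-run handle lookup (and its lock acquisition) disappears from the hot path, which is the stated motivation of the patch.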
Date: Wed, 18 Dec 2024 19:40:51 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
References: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-11-qperret@google.com>
Subject: [PATCH v4 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

In preparation for handling guest stage-2 mappings at EL2, introduce a
new pKVM hypercall allowing the host to share pages with non-protected
guests.
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/include/asm/kvm_host.h             |  3 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h      |  4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 34 +++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 72 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  8 +++
 7 files changed, 123 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 89c0fac69551..449337f5b2a3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -65,6 +65,7 @@ enum __kvm_host_smccc_func {
 	/* Hypercalls available after pKVM finalisation */
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..1246f1d01dbf 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,9 @@ struct kvm_vcpu_arch {
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
+	/* Pages to top-up the pKVM/EL2 guest pool */
+	struct kvm_hyp_memcache pkvm_memcache;
+
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 25038ac705d8..15b8956051b6 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,6 +39,8 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+			    enum kvm_pgtable_prot prot);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 2a5eabf4b753..34233d586060 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -46,6 +46,8 @@ struct hyp_page {
 
 	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
 	enum pkvm_page_state host_state : 8;
+
+	u32 host_share_guest_count;
 };
 
 extern u64 __hyp_vmemmap;
@@ -68,7 +70,7 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr)
 
 static inline struct hyp_page *hyp_phys_to_page(phys_addr_t phys)
 {
-	BUILD_BUG_ON(sizeof(struct hyp_page) != sizeof(u32));
+	BUILD_BUG_ON(sizeof(struct hyp_page) != sizeof(u64));
 	return &hyp_vmemmap[hyp_phys_to_pfn(phys)];
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 95d78db315b3..d659462fbf5d 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -211,6 +211,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+
+	return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache,
+			       host_vcpu->arch.pkvm_memcache.nr_pages,
+			       &host_vcpu->arch.pkvm_memcache);
+}
+
+static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = pkvm_refill_memcache(hyp_vcpu);
+	if (ret)
+		goto out;
+
+	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -420,6 +453,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
+	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 12bb5445fe47..fb9592e721cf 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -867,6 +867,27 @@ static int hyp_complete_donation(u64 addr,
 	return pkvm_create_mappings_locked(start, end, prot);
 }
 
+static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
+{
+	if (!kvm_pte_valid(pte))
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+					  u64 size, enum pkvm_page_state state)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	struct check_walk_data d = {
+		.desired = state,
+		.get_page_state = guest_get_page_state,
+	};
+
+	hyp_assert_lock_held(&vm->lock);
+	return check_page_state_range(&vm->pgt, addr, size, &d);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1349,3 +1370,54 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+			    enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 phys = hyp_pfn_to_phys(pfn);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	int ret;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	switch (page->host_state) {
+	case PKVM_PAGE_OWNED:
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
+		break;
+	case PKVM_PAGE_SHARED_OWNED:
+		if (page->host_share_guest_count)
+			break;
+		/* Only host to np-guest multi-sharing is tolerated */
+		WARN_ON(1);
+		fallthrough;
+	default:
+		ret = -EPERM;
+		goto unlock;
+	}
+
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
+				       &vcpu->vcpu.arch.pkvm_memcache, 0));
+	page->host_share_guest_count++;
+
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 496d186efb03..0109c36566c8 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -795,6 +795,14 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	/* Push the metadata pages to the teardown memcache */
 	for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
 		struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
+		struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
+
+		while (vcpu_mc->nr_pages) {
+			void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
+
+			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+		}
 
 		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}

From patchwork Wed Dec 18 19:40:52 2024
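The host_state switch at the core of __pkvm_host_share_guest() in the previous patch is a small state machine: an OWNED page becomes SHARED_OWNED on first share, and additional shares of an already-shared page are tolerated only when the existing shares are host-to-np-guest ones (tracked by host_share_guest_count); a page shared with anyone else is rejected. A userspace sketch of just that transition, with hypothetical MODEL_-prefixed names rather than the kernel's enum values:

```c
#include <assert.h>

/* Stand-ins for the pkvm host-state values relevant to this transition. */
enum model_host_state {
	MODEL_PAGE_OWNED,
	MODEL_PAGE_SHARED_OWNED,
};

/* Stand-in for the relevant fields of struct hyp_page. */
struct model_page {
	enum model_host_state host_state;
	unsigned int share_guest_count; /* host->np-guest shares of this page */
};

/* Mirrors the host_state switch in __pkvm_host_share_guest():
 * OWNED        -> SHARED_OWNED, count = 1
 * SHARED_OWNED -> count++ only if the existing shares are guest shares;
 *                 otherwise (e.g. shared with the hypervisor) refuse.
 * Returns 0 on success, -1 standing in for -EPERM. */
int model_share_to_guest(struct model_page *p)
{
	switch (p->host_state) {
	case MODEL_PAGE_OWNED:
		p->host_state = MODEL_PAGE_SHARED_OWNED;
		break;
	case MODEL_PAGE_SHARED_OWNED:
		if (p->share_guest_count)
			break;
		/* shared, but not with an np-guest: refuse multi-sharing */
		return -1;
	default:
		return -1;
	}

	p->share_guest_count++;
	return 0;
}
```

The counter is what makes the operation reversible later: the matching unshare path can decrement it and only return the page to the OWNED state once the last guest mapping is gone.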
b=0PQ4wNrmtUnZszzg9aT4LceGDUsP/hPqsfASZlF9vYuF5TpVmV3GarqtW1YwZ6P/Dm sBHH5E5ppEp0N4z1HzuJkSI52WRQ6s+HNMHhFnfW3aPuxBRfpEvFAtlBJHYDiOPWcR9v MWq/2cKMKjfDUNBYZOa54RWY2DyGQwaZy68caXPc6vP+ITgvPNIKrE9nTcvGhvQ0urvC bBDM8DpPveMWkVGStsa02bu96cwSwcSzo5TjW9xlWRY97uPYhUNKjY+lt5rVJLpZb1l9 76a2p5UFFSf+XdCVGc79PUY9KyF+n64BSLkbeliLu+vJndPlFR92lzAEQIAkQQLtwICE QZoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734550886; x=1735155686; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=OoE6hqB3Qh8FU0QOD8fDiy8tqJGDCls1rcuEN0uIG+w=; b=UzG49ZaDnh6mal6l9gObN7jO5ev8gnrV+wCA695IV4lJgRhZb34T3y1jFCEfbdPqb/ PiHWCmzaIkDkthqLJa1JZrK9lNBgHX+3ItVZPUoJPKP4hnvmH2FxpGnClRDnNpDHWQeg o5DWMRV4uVTbyhh3m6J3EfvQTdpkULvs9y68c2kBRyIIuGNaLZMfnAlZSe7vKESk4UsQ OFDBzmje1rw2uRzM4YRWY13Nyn57WRXZf1zsNfto0P6pXczCUgTFuxgP9IQDNNcrwgPG Cmiof/Pm48eT19m8RMBdYTuhZbQ4G0iIFY2IY4a3e2LiqUdEIJq+N34UZX0uTiLy5XKO YefA== X-Forwarded-Encrypted: i=1; AJvYcCWFPbV4h0Qd73E8RFPuQNJNwAa+69CO9hYvjIdgk0WrEnzKFYL6qxXTRrISiIWyBTx6c5pDXz2MJLUw7V/PleDR@lists.infradead.org X-Gm-Message-State: AOJu0YwQe77GRjWTMAqPMndbT/mk5e3yiC+Jrce/0D0PCQ15oR9LS0tn dmcgqi9YP6KhjL1mPKot46mldS8o4c4gi3KOTseeaIeR0bI9k/nyHEwFK2qxBf8MV6tYK9EpY/g t97oOig== X-Google-Smtp-Source: AGHT+IHQ7EwTkicFxnfE7dV0rNuG34edDS8BDUT6NHRAcamTZJ79zT7UYiNopDmox/34ACzIpbjGVsaG3RRV X-Received: from edbin7.prod.google.com ([2002:a05:6402:2087:b0:5d0:f9e2:d3c4]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:51c8:b0:5d2:8f70:75f6 with SMTP id 4fb4d7f45d1cf-5d7ee3fd0b2mr3766308a12.30.1734550886668; Wed, 18 Dec 2024 11:41:26 -0800 (PST) Date: Wed, 18 Dec 2024 19:40:52 +0000 In-Reply-To: <20241218194059.3670226-1-qperret@google.com> Mime-Version: 1.0 References: <20241218194059.3670226-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241218194059.3670226-12-qperret@google.com> Subject: [PATCH 
v4 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest() From: Quentin Perret To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org In preparation for letting the host unmap pages from non-protected guests, introduce a new hypercall implementing the host-unshare-guest transition. Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 ++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 67 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 12 ++++ 6 files changed, 108 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 449337f5b2a3..0b6c4d325134 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -66,6 +66,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 15b8956051b6..e6d080b71779 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -41,6 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); +int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index be52c5b15e21..0cc2a429f1fb 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu) return vcpu_is_protected(&hyp_vcpu->vcpu); } +static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm) +{ + return kvm_vm_is_protected(&hyp_vm->kvm); +} + void pkvm_hyp_vm_table_init(void *tbl); int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva, @@ -78,6 +83,7 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void); struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle); +struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle); void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm); #endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index d659462fbf5d..3c3a27c985a2 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -244,6 +244,26 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = ret; } +static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, 
host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + struct pkvm_hyp_vm *hyp_vm; + int ret = -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm = get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret = __pkvm_host_unshare_guest(gfn, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) = ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -454,6 +474,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_share_hyp), HANDLE_FUNC(__pkvm_host_unshare_hyp), HANDLE_FUNC(__pkvm_host_share_guest), + HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index fb9592e721cf..30243b7922f1 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1421,3 +1421,70 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, return ret; } + +static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa) +{ + enum pkvm_page_state state; + struct hyp_page *page; + kvm_pte_t pte; + u64 phys; + s8 level; + int ret; + + ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level); + if (ret) + return ret; + if (level != KVM_PGTABLE_LAST_LEVEL) + return -E2BIG; + if (!kvm_pte_valid(pte)) + return -ENOENT; + + state = guest_get_page_state(pte, ipa); + if (state != PKVM_PAGE_SHARED_BORROWED) + return -EPERM; + + phys = kvm_pte_to_phys(pte); + ret = check_range_allowed_memory(phys, phys + PAGE_SIZE); + if (WARN_ON(ret)) + return ret; + + page = hyp_phys_to_page(phys); + if (page->host_state != PKVM_PAGE_SHARED_OWNED) + return -EPERM; + if (WARN_ON(!page->host_share_guest_count)) + return -EINVAL; + + *__phys = phys; + + return 0; +} + +int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm) +{ + u64 ipa = 
hyp_pfn_to_phys(gfn); + struct hyp_page *page; + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret = __check_host_shared_guest(vm, &phys, ipa); + if (ret) + goto unlock; + + ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE); + if (ret) + goto unlock; + + page = hyp_phys_to_page(phys); + page->host_share_guest_count--; + if (!page->host_share_guest_count) + WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED)); + +unlock: + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 0109c36566c8..2c618f2f2769 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -376,6 +376,18 @@ void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm) hyp_spin_unlock(&vm_table_lock); } +struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle) +{ + struct pkvm_hyp_vm *hyp_vm = get_pkvm_hyp_vm(handle); + + if (hyp_vm && pkvm_hyp_vm_is_protected(hyp_vm)) { + put_pkvm_hyp_vm(hyp_vm); + hyp_vm = NULL; + } + + return hyp_vm; +} + static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm) { struct kvm *kvm = &hyp_vm->kvm; From patchwork Wed Dec 18 19:40:53 2024
Date: Wed, 18 Dec 2024 19:40:53 +0000 In-Reply-To: <20241218194059.3670226-1-qperret@google.com> Message-ID: <20241218194059.3670226-13-qperret@google.com> Subject: [PATCH v4 12/18] KVM: arm64: Introduce __pkvm_host_relax_perms_guest() From: Quentin Perret To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org
Introduce a new hypercall allowing the host to relax the stage-2 permissions of mappings in a non-protected guest page-table. It will be used later once we start allowing RO memslots and dirty logging. Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 20 ++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 23 +++++++++++++++++++ 4 files changed, 45 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 0b6c4d325134..66ee8542dcc9 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -67,6 +67,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index e6d080b71779..181aec2d5bc1 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_relax_perms_guest(u64 gfn, 
struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 3c3a27c985a2..287e4ee93ef2 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -264,6 +264,25 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = ret; } +static void handle___pkvm_host_relax_perms_guest(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(u64, gfn, host_ctxt, 1); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2); + struct pkvm_hyp_vcpu *hyp_vcpu; + int ret = -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vcpu = pkvm_get_loaded_hyp_vcpu(); + if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu)) + goto out; + + ret = __pkvm_host_relax_perms_guest(gfn, hyp_vcpu, prot); +out: + cpu_reg(host_ctxt, 1) = ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -475,6 +494,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_unshare_hyp), HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__pkvm_host_unshare_guest), + HANDLE_FUNC(__pkvm_host_relax_perms_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 30243b7922f1..aa8e0408aebb 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1488,3 +1488,26 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm) return ret; } + +int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot) +{ + struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu); + u64 ipa = hyp_pfn_to_phys(gfn); + u64 phys; + int 
ret; + + if (prot & ~KVM_PGTABLE_PROT_RWX) + return -EINVAL; + + host_lock_component(); + guest_lock_component(vm); + + ret = __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} From patchwork Wed Dec 18 19:40:54 2024
Date: Wed, 18 Dec 2024 19:40:54 +0000 In-Reply-To: <20241218194059.3670226-1-qperret@google.com> Message-ID: <20241218194059.3670226-14-qperret@google.com> Subject: [PATCH v4 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest() From: Quentin Perret To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Introduce a new hypercall to remove the write permission from a non-protected guest stage-2 mapping. This will be used for e.g. enabling dirty logging.
Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 19 +++++++++++++++++ 4 files changed, 42 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 66ee8542dcc9..8663a588cf34 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -68,6 +68,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 181aec2d5bc1..0bbfb0e1734c 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); +int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 287e4ee93ef2..98d317735107 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -283,6 +283,26 @@ static void handle___pkvm_host_relax_perms_guest(struct kvm_cpu_context *host_ct cpu_reg(host_ctxt, 1) = ret; } +static void 
handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + struct pkvm_hyp_vm *hyp_vm; + int ret = -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm = get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) = ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -495,6 +515,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__pkvm_host_relax_perms_guest), + HANDLE_FUNC(__pkvm_host_wrprotect_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index aa8e0408aebb..94e4251b5077 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1511,3 +1511,22 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_ return ret; } + +int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm) +{ + u64 ipa = hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret = __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} From patchwork Wed Dec 18 19:40:55 2024
Date: Wed, 18 Dec 2024 19:40:55 +0000 In-Reply-To: <20241218194059.3670226-1-qperret@google.com> Message-ID: <20241218194059.3670226-15-qperret@google.com> Subject: [PATCH v4 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest() 
From: Quentin Perret To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Plumb the kvm_stage2_test_clear_young() callback into pKVM for non-protected guests. It will later be called from MMU notifiers. Tested-by: Fuad Tabba Reviewed-by: Fuad Tabba Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 22 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 19 ++++++++++++++++ 4 files changed, 43 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 8663a588cf34..4f97155d6323 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -69,6 +69,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 0bbfb0e1734c..74bd6c72fff2 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ 
b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -44,6 +44,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 98d317735107..616e172a9c48 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -303,6 +303,27 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt cpu_reg(host_ctxt, 1) = ret; } +static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(bool, mkold, host_ctxt, 3); + struct pkvm_hyp_vm *hyp_vm; + int ret = -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm = get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) = ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -516,6 +537,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__pkvm_host_relax_perms_guest), HANDLE_FUNC(__pkvm_host_wrprotect_guest), + HANDLE_FUNC(__pkvm_host_test_clear_young_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 94e4251b5077..0e42c3baaf4b 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1530,3 +1530,22 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm) return ret; } + +int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm) +{ + u64 ipa = hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret = __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} From patchwork Wed Dec 18 19:40:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 13914120 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 981EDE77187 for ; Wed, 18 Dec 2024 19:59:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=WyQ2FY+PZ6PTObIJ7L2BgknDpmScKnttO8f3zJ1YOCk=; b=T1F4sAFcJEqiEZ6/g4glxoB32P zFPdwIcFrOHQxXVUYZQ5NHdkZgA926aESfwQEHd6h1UsksNVRBKONXWA8uywzsEguOUm1ZZc2m2SM 6NtIOpuIn9KSqh/97QtgQ03sxBpmdYK2bxb2CjFnlRJO8yOKYEYMVQ4xREABXB9PBaZXuG4jH71JW 
Date: Wed, 18 Dec 2024 19:40:56 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-16-qperret@google.com>
Subject: [PATCH v4 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for
non-protected guests. It will be called later from the fault handling
path.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 19 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 20 +++++++++++++++++++
 4 files changed, 41 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4f97155d6323..a3b07db2776c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
         __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest,
         __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
         __KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
+        __KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest,
         __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
         __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
         __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 74bd6c72fff2..978f38c386ee 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -45,6 +45,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 616e172a9c48..32c4627b5b5b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -324,6 +324,24 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
         cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_ctxt)
+{
+        DECLARE_REG(u64, gfn, host_ctxt, 1);
+        struct pkvm_hyp_vcpu *hyp_vcpu;
+        int ret = -EINVAL;
+
+        if (!is_protected_kvm_enabled())
+                goto out;
+
+        hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+        if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+                goto out;
+
+        ret = __pkvm_host_mkyoung_guest(gfn, hyp_vcpu);
+out:
+        cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
         DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -538,6 +556,7 @@ static const hcall_t host_hcall[] = {
         HANDLE_FUNC(__pkvm_host_relax_perms_guest),
         HANDLE_FUNC(__pkvm_host_wrprotect_guest),
         HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
+        HANDLE_FUNC(__pkvm_host_mkyoung_guest),
         HANDLE_FUNC(__kvm_adjust_pc),
         HANDLE_FUNC(__kvm_vcpu_run),
         HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0e42c3baaf4b..eae03509d371 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1549,3 +1549,23 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
 
         return ret;
 }
+
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
+{
+        struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+        u64 ipa = hyp_pfn_to_phys(gfn);
+        u64 phys;
+        int ret;
+
+        host_lock_component();
+        guest_lock_component(vm);
+
+        ret = __check_host_shared_guest(vm, &phys, ipa);
+        if (!ret)
+                kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
+
+        guest_unlock_component(vm);
+        host_unlock_component();
+
+        return ret;
+}
Date: Wed, 18 Dec 2024 19:40:57 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-17-qperret@google.com>
Subject: [PATCH v4 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Introduce a new hypercall to flush the TLBs of non-protected guests. The
host kernel will be responsible for issuing this hypercall after changing
stage-2 permissions using the __pkvm_host_relax_guest_perms() or
__pkvm_host_wrprotect_guest() paths. This is left under the host's
responsibility for performance reasons.

Note however that the TLB maintenance for all *unmap* operations still
remains entirely under the hypervisor's responsibility for security
reasons -- an unmapped page may be donated to another entity, so a stale
TLB entry could be used to leak private data.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a3b07db2776c..002088c6e297 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -87,6 +87,7 @@ enum __kvm_host_smccc_func {
         __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
         __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
         __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
+        __KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)        extern char sym[]
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 32c4627b5b5b..130f5f23bcb5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -389,6 +389,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
         __kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }
 
+static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+        DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+        struct pkvm_hyp_vm *hyp_vm;
+
+        if (!is_protected_kvm_enabled())
+                return;
+
+        hyp_vm = get_np_pkvm_hyp_vm(handle);
+        if (!hyp_vm)
+                return;
+
+        __kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+        put_pkvm_hyp_vm(hyp_vm);
+}
+
 static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
         DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -573,6 +589,7 @@ static const hcall_t host_hcall[] = {
         HANDLE_FUNC(__pkvm_teardown_vm),
         HANDLE_FUNC(__pkvm_vcpu_load),
         HANDLE_FUNC(__pkvm_vcpu_put),
+        HANDLE_FUNC(__pkvm_tlb_flush_vmid),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
Date: Wed, 18 Dec 2024 19:40:58 +0000
In-Reply-To: <20241218194059.3670226-1-qperret@google.com>
Message-ID: <20241218194059.3670226-18-qperret@google.com>
Subject: [PATCH v4 17/18] KVM: arm64: Introduce the EL1 pKVM MMU
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Introduce a set of helper functions for manipulating the pKVM guest
stage-2 page-tables from EL1 using pKVM's HVC interface.

Each helper has an exact one-to-one correspondence with the traditional
kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly
matching prototype. This will ease plumbing later on in mmu.c.

These callbacks track the gfn->pfn mappings in a simple rb_tree indexed
by IPA in lieu of a page-table. This rb-tree is kept in sync with pKVM's
state and is protected by the mmu_lock like a traditional stage-2
page-table.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h    |   1 +
 arch/arm64/include/asm/kvm_pgtable.h |  23 +--
 arch/arm64/include/asm/kvm_pkvm.h    |  26 ++++
 arch/arm64/kvm/pkvm.c                | 201 +++++++++++++++++++++++++++
 4 files changed, 242 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1246f1d01dbf..f23f4ea9ec8b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -85,6 +85,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 struct kvm_hyp_memcache {
         phys_addr_t head;
         unsigned long nr_pages;
+        struct pkvm_mapping *mapping; /* only used from EL1 */
 };
 
 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 04418b5e3004..6b9d274052c7 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -412,15 +412,20 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  * be used instead of block mappings.
  */
 struct kvm_pgtable {
-        u32 ia_bits;
-        s8 start_level;
-        kvm_pteref_t pgd;
-        struct kvm_pgtable_mm_ops *mm_ops;
-
-        /* Stage-2 only */
-        struct kvm_s2_mmu *mmu;
-        enum kvm_pgtable_stage2_flags flags;
-        kvm_pgtable_force_pte_cb_t force_pte_cb;
+        union {
+                struct rb_root pkvm_mappings;
+                struct {
+                        u32 ia_bits;
+                        s8 start_level;
+                        kvm_pteref_t pgd;
+                        struct kvm_pgtable_mm_ops *mm_ops;
+
+                        /* Stage-2 only */
+                        enum kvm_pgtable_stage2_flags flags;
+                        kvm_pgtable_force_pte_cb_t force_pte_cb;
+                };
+        };
+        struct kvm_s2_mmu *mmu;
 };
 
 /**
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..65f988b6fe0d 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -137,4 +137,30 @@ static inline size_t pkvm_host_sve_state_size(void)
                         SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
 }
 
+struct pkvm_mapping {
+        struct rb_node node;
+        u64 gfn;
+        u64 pfn;
+};
+
+int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+                             struct kvm_pgtable_mm_ops *mm_ops);
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
+int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
+                            enum kvm_pgtable_prot prot, void *mc,
+                            enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold);
+int pkvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+                                    enum kvm_pgtable_walk_flags flags);
+void pkvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+                                 enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                              struct kvm_mmu_memory_cache *mc);
+void pkvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level);
+kvm_pte_t *pkvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+                                               enum kvm_pgtable_prot prot, void *mc,
+                                               bool force_pte);
 #endif /* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 85117ea8f351..930b677eb9b0 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,3 +269,203 @@ static int __init finalize_pkvm(void)
         return ret;
 }
 device_initcall_sync(finalize_pkvm);
+
+static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+{
+        struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
+        struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
+
+        if (a->gfn < b->gfn)
+                return -1;
+        if (a->gfn > b->gfn)
+                return 1;
+        return 0;
+}
+
+static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+{
+        struct rb_node *node = root->rb_node, *prev = NULL;
+        struct pkvm_mapping *mapping;
+
+        while (node) {
+                mapping = rb_entry(node, struct pkvm_mapping, node);
+                if (mapping->gfn == gfn)
+                        return node;
+                prev = node;
+                node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
+        }
+
+        return prev;
+}
+
+/*
+ * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
+ * of __map inline.
+ */
+#define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)                    \
+        for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,   \
+                                                             ((__start) >> PAGE_SHIFT)); \
+             __tmp && ({                                                                \
+                        __map = rb_entry(__tmp, struct pkvm_mapping, node);             \
+                        __tmp = rb_next(__tmp);                                         \
+                        true;                                                           \
+                       });                                                              \
+            )                                                                           \
+                if (__map->gfn < ((__start) >> PAGE_SHIFT))                             \
+                        continue;                                                       \
+                else if (__map->gfn >= ((__end) >> PAGE_SHIFT))                         \
+                        break;                                                          \
+                else
+
+int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+                             struct kvm_pgtable_mm_ops *mm_ops)
+{
+        pgt->pkvm_mappings = RB_ROOT;
+        pgt->mmu = mmu;
+
+        return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        pkvm_handle_t handle = kvm->arch.pkvm.handle;
+        struct pkvm_mapping *mapping;
+        struct rb_node *node;
+
+        if (!handle)
+                return;
+
+        node = rb_first(&pgt->pkvm_mappings);
+        while (node) {
+                mapping = rb_entry(node, struct pkvm_mapping, node);
+                kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+                node = rb_next(node);
+                rb_erase(&mapping->node, &pgt->pkvm_mappings);
+                kfree(mapping);
+        }
+}
+
+int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            u64 phys, enum kvm_pgtable_prot prot,
+                            void *mc, enum kvm_pgtable_walk_flags flags)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        struct pkvm_mapping *mapping = NULL;
+        struct kvm_hyp_memcache *cache = mc;
+        u64 gfn = addr >> PAGE_SHIFT;
+        u64 pfn = phys >> PAGE_SHIFT;
+        int ret;
+
+        if (size != PAGE_SIZE)
+                return -EINVAL;
+
+        lockdep_assert_held_write(&kvm->mmu_lock);
+        ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+        if (ret) {
+                /* Is the gfn already mapped due to a racing vCPU? */
+                if (ret == -EPERM)
+                        return -EAGAIN;
+        }
+
+        swap(mapping, cache->mapping);
+        mapping->gfn = gfn;
+        mapping->pfn = pfn;
+        WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+
+        return ret;
+}
+
+int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        pkvm_handle_t handle = kvm->arch.pkvm.handle;
+        struct pkvm_mapping *mapping;
+        int ret = 0;
+
+        lockdep_assert_held_write(&kvm->mmu_lock);
+        for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
+                ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+                if (WARN_ON(ret))
+                        break;
+                rb_erase(&mapping->node, &pgt->pkvm_mappings);
+                kfree(mapping);
+        }
+
+        return ret;
+}
+
+int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        pkvm_handle_t handle = kvm->arch.pkvm.handle;
+        struct pkvm_mapping *mapping;
+        int ret = 0;
+
+        lockdep_assert_held(&kvm->mmu_lock);
+        for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
+                ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+                if (WARN_ON(ret))
+                        break;
+        }
+
+        return ret;
+}
+
+int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        struct pkvm_mapping *mapping;
+
+        lockdep_assert_held(&kvm->mmu_lock);
+        for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
+                __clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+
+        return 0;
+}
+
+bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold)
+{
+        struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+        pkvm_handle_t handle = kvm->arch.pkvm.handle;
+        struct pkvm_mapping *mapping;
+        bool young = false;
+
+        lockdep_assert_held(&kvm->mmu_lock);
+        for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
+                young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
+                                           mkold);
+
+        return young;
+}
+
+int pkvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+                                    enum kvm_pgtable_walk_flags flags)
+{
+        return kvm_call_hyp_nvhe(__pkvm_host_relax_perms_guest, addr >> PAGE_SHIFT, prot);
+}
+
+void pkvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+                                 enum kvm_pgtable_walk_flags flags)
+{
+        WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT));
+}
+
+void pkvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
+{
+        WARN_ON_ONCE(1);
+}
+
+kvm_pte_t *pkvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+                                               enum kvm_pgtable_prot prot, void *mc, bool force_pte)
+{
+        WARN_ON_ONCE(1);
+        return NULL;
+}
+
+int pkvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                              struct kvm_mmu_memory_cache *mc)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=Vrp18CDHZ6Mn1scEH2wlnFhiKHJPrKlVzJRYJ5Umd4I=; b=WOg/0X07Wt8tmiy28e6Ec45/bI dcRn84hplggp6ITHiHP5ymciIrc5Ko4NLyqtVO7iyy/PMggqf7Io23Po/diQkA4HHs38Ip3bi2iE3 vl+8zvh482UyAmtSkFHJShwdUDdWjTPK1VKCn4cLYRbm9FvfbXHRN4HAOvooxIVOfh3lFw63Jpy3A HKhOJojl84v63wTiC/I7SQZtnZNwT61Qc1RUnkGsDzGtRUOuJOlzAQCQOnC9fMCrzL5pqVnHV7M66 /RvVhZXuqzdChOCv+GntgTILmeYk3hkUCugXfCNUI2Mpw24jhX+bFEcuVmFFuvXUYUIfmcs8kDNy6 ToS5cGxQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tO0FH-000000000xo-1QMx; Wed, 18 Dec 2024 20:02:11 +0000 Received: from mail-ed1-x549.google.com ([2a00:1450:4864:20::549]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tNzvT-0000000HYSv-3vxe for linux-arm-kernel@lists.infradead.org; Wed, 18 Dec 2024 19:41:45 +0000 Received: by mail-ed1-x549.google.com with SMTP id 4fb4d7f45d1cf-5d3cb2e6c42so7667714a12.3 for ; Wed, 18 Dec 2024 11:41:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734550902; x=1735155702; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Vrp18CDHZ6Mn1scEH2wlnFhiKHJPrKlVzJRYJ5Umd4I=; b=2Md/panu6peTNF0siIJSXPUQwuvDDYHCSo5TKLyJXJYMeoVYuthltglZbLcYY64wT6 7WU3ITolUMQc7Hla/5NJOhLJftqEfWGbkoGCOdKQDQPzJA6h62KQNuSuluAuKRn0XIhR Rremb+YiDaaUdcIK24XIKfzcEWnSJD48uWIvoUQ9CfTLi/xRKyXr41dVgcq3+fnNzhUb x/td+ZMtNf85RsDNTvbnUNU9Oinabm+8piEd2Dg+7prGebEfCXE6+jebGNH0y8USUAs8 EnFvz5j70mRSxHD40RjofrFhsxHtie2Zd1xQyMBY07MJGGxRNXqeY1lGzcWK6Jb0Q/2a 0QXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734550902; x=1735155702; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Vrp18CDHZ6Mn1scEH2wlnFhiKHJPrKlVzJRYJ5Umd4I=; 
b=Arz92FIIv2dSWxUeh8BPpPgovzViOAo86JszL/hlC7bl4cwguJp7L+RZu3aLxxYZ9p 8FR7XUz25JG6DbAguIsrz/b2x1BKba2Z+ZhtyiRbVa6/rfPABCnORoo0xsWqyyAyJyA4 N3Wl+rqhBSntNCMBtZqVKOI26pZSiv+izwFPwILfMU7N7X79bOPKGdL4TDHgxI8jbwao wHcoGJASuyHRUh/89g83OVyk6b26uP0bnOz3PvWHqQrdQBZVe0kXgk2i+QZVoNRxTgXY iRwQcuLVHcZRjlL89Qt9VrDVFpFF1Wm29hbY58qXtPcKa/pxskqByHR3GyJa8wqd0Kth z9lg== X-Forwarded-Encrypted: i=1; AJvYcCVdJYY4+/ulMpnKcXam1zHnM3SXnw/5KxOJc2HkviumIQHK2e+Txwl/BuaV+YdTo4hE4A7YBKa5KWJrM5fcYdd2@lists.infradead.org X-Gm-Message-State: AOJu0YyRmVjRA0G3n80qVl3OKrRKxO/S1sSDBM5ciYLNTlIZ8qIGCmk7 EUZkgiKS7FeswR5YOtDg9jdCxpz0nmdW/MymTXNd/aFo9kf4lJQGNp4MDSBbXPd4iVsgrHEmfCZ +T2yTbA== X-Google-Smtp-Source: AGHT+IE/bh/YBSK6Nzso5i2wcnxVL5uqB8UYSDaBDVMvQfLpUj7+lL77OQ9xcJ56VGz3+7xUpwFFrCHSkRHC X-Received: from edbek21.prod.google.com ([2002:a05:6402:3715:b0:5d7:c7b4:8772]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:51cb:b0:5d1:22c2:6c56 with SMTP id 4fb4d7f45d1cf-5d7ee3b4b98mr3413543a12.17.1734550902170; Wed, 18 Dec 2024 11:41:42 -0800 (PST) Date: Wed, 18 Dec 2024 19:40:59 +0000 In-Reply-To: <20241218194059.3670226-1-qperret@google.com> Mime-Version: 1.0 References: <20241218194059.3670226-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241218194059.3670226-19-qperret@google.com> Subject: [PATCH v4 18/18] KVM: arm64: Plumb the pKVM MMU in KVM From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241218_114143_979855_CE00EBD0 X-CRM114-Status: GOOD ( 21.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: 
Introduce the KVM_PGT_FN() helper macro to allow switching from the
traditional pgtable code to the pKVM version easily in mmu.c. The cost
of this 'indirection' is expected to be very minimal due to
is_protected_kvm_enabled() being backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.

Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h   | 16 ++++++
 arch/arm64/kvm/arm.c               |  9 +++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  2 -
 arch/arm64/kvm/mmu.c               | 87 ++++++++++++++++++++----------
 4 files changed, 82 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 66d93e320ec8..d116ab4230e8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -353,6 +353,22 @@ static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	return &kvm->arch.mmu != mmu;
 }
 
+static inline void kvm_fault_lock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_lock(&kvm->mmu_lock);
+	else
+		read_lock(&kvm->mmu_lock);
+}
+
+static inline void kvm_fault_unlock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_unlock(&kvm->mmu_lock);
+	else
+		read_unlock(&kvm->mmu_lock);
+}
+
 #ifdef CONFIG_PTDUMP_STAGE2_DEBUGFS
 void kvm_s2_ptdump_create_debugfs(struct kvm *kvm);
 #else
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 55cc62b2f469..9bcbc7b8ed38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 130f5f23bcb5..258d572eed62 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
 
 	hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWI | HCR_TWE);
 	hyp_vcpu->vcpu.arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2) &
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 641e4fec1659..9403524c11c6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,8 @@
 static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_FN(fn)	(!is_protected_kvm_enabled() ? fn : p ## fn)
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
 					   phys_addr_t size)
 {
@@ -147,7 +150,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_split)(pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +171,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
 
@@ -225,7 +236,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_FN(kvm_pgtable_stage2_free_unlinked)(&kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -324,7 +335,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
+	WARN_ON(stage2_apply_range(mmu, start, end, KVM_PGT_FN(kvm_pgtable_stage2_unmap),
 				   may_block));
 }
 
@@ -336,7 +347,7 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, KVM_PGT_FN(kvm_pgtable_stage2_flush));
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +953,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 		return -ENOMEM;
 
 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_FN(kvm_pgtable_stage2_init)(pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;
 
+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +974,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
 
-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 
 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +982,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1079,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_FN(kvm_pgtable_stage2_destroy)(pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1096,11 @@ static void *hyp_mc_alloc_fn(void *unused)
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }
 
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1108,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
 				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1152,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;
 
 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, addr, PAGE_SIZE,
+							 pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1151,7 +1173,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, KVM_PGT_FN(kvm_pgtable_stage2_wrprotect));
 }
 
 /**
@@ -1442,9 +1464,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1472,8 +1494,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1494,7 +1523,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1634,7 +1663,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot |= kvm_encode_nested_level(nested);
 	}
 
-	read_lock(&kvm->mmu_lock);
+	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_invalidate_retry(kvm, mmu_seq)) {
 		ret = -EAGAIN;
@@ -1696,16 +1725,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
 					     memcache, flags);
 	}
 
 out_unlock:
 	kvm_release_faultin_page(kvm, page, !!ret, writable);
-	read_unlock(&kvm->mmu_lock);
+	kvm_fault_unlock(kvm);
 
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret)
@@ -1724,7 +1753,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	KVM_PGT_FN(kvm_pgtable_stage2_mkyoung)(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
@@ -1764,7 +1793,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}
 
 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
 
 		if (is_iabt)
@@ -1930,7 +1959,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, true);
 
	/*
@@ -1946,7 +1975,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_FN(kvm_pgtable_stage2_test_clear_young)(kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, false);
 }