From patchwork Mon Dec 16 17:57:46 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:46 +0000
Subject: [PATCH v3 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
Message-ID: <20241216175803.2716565-2-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

The 'concrete' (a.k.a. non-meta) page states are currently encoded using
software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states, as that
makes conversions from and to PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.
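To make the new layout concrete, here is a minimal user-space sketch of the
state round-trip. The BIT()/FIELD_PREP()/FIELD_GET() stand-ins are expanded
for illustration (in the kernel they come from linux/bits.h and
linux/bitfield.h, and use GCC/Clang builtins here), and the SW0/SW1 positions
are assumed to be PTE bits 55 and 56:

#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel helpers */
#define BIT(n)			(1ULL << (n))
#define STATE_MASK		(BIT(55) | BIT(56))	/* assumed SW0/SW1 bits */
#define FIELD_PREP(m, v)	(((unsigned long long)(v) << __builtin_ctzll(m)) & (m))
#define FIELD_GET(m, r)		(((r) & (m)) >> __builtin_ctzll(m))

/* Mirrors the re-arranged enum: concrete states live in bits 0-1 */
enum page_state {
	PAGE_OWNED		= 0,
	PAGE_SHARED_OWNED	= BIT(0),
	PAGE_SHARED_BORROWED	= BIT(1),
};

int main(void)
{
	unsigned long long pte = 0;

	/* Encode the state into the PTE's SW bits, then decode it back */
	pte |= FIELD_PREP(STATE_MASK, PAGE_SHARED_BORROWED);
	assert(FIELD_GET(STATE_MASK, pte) == PAGE_SHARED_BORROWED);
	printf("round-trip OK\n");
	return 0;
}

Because the concrete states now occupy bits 0 and 1 of the enum, a single
FIELD_PREP()/FIELD_GET() against the SW-bits mask is all a conversion needs,
which is exactly what the diff below does.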
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..8c30362af2b9 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,27 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
 
 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
 						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }
 
 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }
 
 struct host_mmu {

From patchwork Mon Dec 16 17:57:47 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:47 +0000
Subject: [PATCH v3 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
Message-ID: <20241216175803.2716565-3-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 34 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 33 ++++++++++++++++++
 2 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 8c30362af2b9..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,42 +11,10 @@
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_pgtable.h>
 #include <asm/virt.h>
+#include <nvhe/memory.h>
 #include <nvhe/pkvm.h>
 #include <nvhe/spinlock.h>
 
-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..c84b24234ac7 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,39 @@
 
 #include <linux/types.h>
 
+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;

From patchwork Mon Dec 16 17:57:48 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:48 +0000
Subject: [PATCH v3 03/18] KVM: arm64: Make hyp_page::order a u8
Message-ID: <20241216175803.2716565-4-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

We don't need 16 bits to store the hyp page order, and we'll need some
bits to store page ownership data soon, so let's shrink the order
member to a u8.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..f1725bad6331 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index c84b24234ac7..45b8d1840aa4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -41,8 +41,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }
 
 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;
 
@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 
 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;
 
 	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;
 
 	hyp_spin_lock(&pool->lock);
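One property worth noting about this change: with refcount at u16 and order
shrunk to a u8, struct hyp_page still fits in 4 bytes while leaving a spare
byte for the ownership state introduced by the next patch. A stand-alone
sketch checking that layout (stdint types substituted for the kernel's
u16/u8):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Mock of the reworked struct; the kernel's u16/u8 become stdint types */
struct hyp_page {
	uint16_t refcount;
	uint8_t  order;
	uint8_t  reserved;	/* byte freed up for ownership tracking */
};

int main(void)
{
	/* The hyp vmemmap holds one hyp_page per physical page, so keeping
	 * the struct at 4 bytes bounds the per-page metadata overhead. */
	static_assert(sizeof(struct hyp_page) == 4, "hyp_page grew");
	printf("sizeof(struct hyp_page) = %zu\n", sizeof(struct hyp_page));
	return 0;
}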
From patchwork Mon Dec 16 17:57:49 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:49 +0000
Subject: [PATCH v3 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
Message-ID: <20241216175803.2716565-5-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as it forces us to break up block mappings purely to
support this software tracking. It leaves the host with an unnecessarily
fragmented stage-2 page-table, in particular when it shares pages with
Secure, which can lead to measurable regressions. Moreover, having this
state stored in the page-table forces us to do multiple costly walks on
the page transition path, hence adding overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 100 ++++++++++++++++-------
 arch/arm64/kvm/hyp/nvhe/setup.c          |   7 +-
 3 files changed, 77 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 45b8d1840aa4..8bd9a539f260 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include <linux/types.h>
 
 /*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ * Bits 0-1 are reserved to track the memory ownership state of each page:
  *   00: The page is owned exclusively by the page-table owner.
  *   01: The page is owned by the page-table owner, but is shared
  *       with another entity.
@@ -43,7 +43,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 struct hyp_page {
 	u16 refcount;
 	u8 order;
-	u8 reserved;
+
+	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
+	enum pkvm_page_state host_state : 8;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..12bb5445fe47 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc)
 
 	memset(addr, 0, PAGE_SIZE);
 	p = hyp_virt_to_page(addr);
-	memset(p, 0, sizeof(*p));
 	p->refcount = 1;
+	p->order = 0;
 
 	return addr;
 }
@@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
 
 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 {
+	struct hyp_page *page;
 	void *addr;
 
 	/* Dump all pgtable pages in the hyp_pool */
@@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	/* Drain the hyp_pool into the memcache */
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
-		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		page = hyp_virt_to_page(addr);
+		page->refcount = 0;
+		page->order = 0;
 		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
 		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
 		addr = hyp_alloc_pages(&vm->pool, 0);
@@ -382,19 +385,28 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }
 
-static bool addr_is_allowed_memory(phys_addr_t phys)
+static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
+{
+	return range->start <= addr && addr < range->end;
+}
+
+static int check_range_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;
 
-	reg = find_mem_range(phys, &range);
+	/*
+	 * Callers can't check the state of a range that overlaps memory and
+	 * MMIO regions, so ensure [start, end[ is in the same kvm_mem_range.
+	 */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1, &range))
+		return -EINVAL;
 
-	return reg && !(reg->flags & MEMBLOCK_NOMAP);
-}
+	if (!reg || reg->flags & MEMBLOCK_NOMAP)
+		return -EPERM;
 
-static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
-{
-	return range->start <= addr && addr < range->end;
+	return 0;
 }
 
 static bool range_is_memory(u64 start, u64 end)
@@ -454,8 +466,10 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 	if (kvm_pte_valid(pte))
 		return -EAGAIN;
 
-	if (pte)
+	if (pte) {
+		WARN_ON(addr_is_memory(addr) && hyp_phys_to_page(addr)->host_state != PKVM_NOPAGE);
 		return -EPERM;
+	}
 
 	do {
 		u64 granule = kvm_granule_size(level);
@@ -477,10 +491,33 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }
 
+static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
+{
+	phys_addr_t end = addr + size;
+
+	for (; addr < end; addr += PAGE_SIZE)
+		hyp_phys_to_page(addr)->host_state = state;
+}
+
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
-			       addr, size, &host_s2_pool, owner_id);
+	int ret;
+
+	if (!addr_is_memory(addr))
+		return -EPERM;
+
+	ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+			      addr, size, &host_s2_pool, owner_id);
+	if (ret)
+		return ret;
+
+	/* Don't forget to update the vmemmap tracking for the host */
+	if (owner_id == PKVM_ID_HOST)
+		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
+	else
+		__host_update_page_state(addr, size, PKVM_NOPAGE);
+
+	return 0;
 }
 
 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
@@ -604,35 +641,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
-{
-	if (!addr_is_allowed_memory(addr))
-		return PKVM_NOPAGE;
-
-	if (!kvm_pte_valid(pte) && pte)
-		return PKVM_NOPAGE;
-
-	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
-}
-
 static int __host_check_page_state_range(u64 addr, u64 size,
 					 enum pkvm_page_state state)
 {
-	struct check_walk_data d = {
-		.desired	= state,
-		.get_page_state	= host_get_page_state,
-	};
+	u64 end = addr + size;
+	int ret;
+
+	ret = check_range_allowed_memory(addr, end);
+	if (ret)
+		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	return check_page_state_range(&host_mmu.pgt, addr, size, &d);
+	for (; addr < end; addr += PAGE_SIZE) {
+		if (hyp_phys_to_page(addr)->host_state != state)
+			return -EPERM;
+	}
+
+	return 0;
 }
 
 static int __host_set_page_state_range(u64 addr, u64 size,
 				       enum pkvm_page_state state)
 {
-	enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
+	if (hyp_phys_to_page(addr)->host_state == PKVM_NOPAGE) {
+		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
-	return host_stage2_idmap_locked(addr, size, prot);
+		if (ret)
+			return ret;
+	}
+
+	__host_update_page_state(addr, size, state);
+
+	return 0;
 }
 
 static int host_request_owned_transition(u64 *completer_addr,
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..7e04d1c2a03d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -180,7 +180,6 @@ static void hpool_put_page(void *addr)
 static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 				     enum kvm_pgtable_walk_flags visit)
 {
-	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	phys_addr_t phys;
 
@@ -203,16 +202,16 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	case PKVM_PAGE_OWNED:
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_BORROWED;
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_OWNED;
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
+	return 0;
 }
 
 static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx,
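The net effect of this patch is that querying or updating the host's state
for a range becomes a linear scan over vmemmap entries instead of a stage-2
page-table walk. A self-contained sketch of that pattern (the array size,
PAGE_SIZE value and helper names here are illustrative, not the kernel's):

#include <errno.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL

enum pkvm_page_state { PKVM_PAGE_OWNED, PKVM_PAGE_SHARED_OWNED,
		       PKVM_PAGE_SHARED_BORROWED, PKVM_NOPAGE };

struct hyp_page { uint16_t refcount; uint8_t order; uint8_t host_state; };

/* Hypothetical flat vmemmap covering 1024 pages; bounds checking omitted */
static struct hyp_page vmemmap[1024];

static struct hyp_page *phys_to_page(uint64_t phys)
{
	return &vmemmap[phys / PAGE_SIZE];
}

/* Mirrors the O(pages) loop that replaces the stage-2 walker */
static int check_page_state_range(uint64_t addr, uint64_t size,
				  enum pkvm_page_state state)
{
	for (uint64_t end = addr + size; addr < end; addr += PAGE_SIZE) {
		if (phys_to_page(addr)->host_state != state)
			return -EPERM;
	}
	return 0;
}

int main(void)
{
	/* Page 1 is shared, so checking pages [0, 2) for OWNED must fail */
	vmemmap[1].host_state = PKVM_PAGE_SHARED_OWNED;
	return check_page_state_range(0, 2 * PAGE_SIZE, PKVM_PAGE_OWNED) == -EPERM ? 0 : 1;
}

The scan is linear in the number of pages, but it avoids breaking up block
mappings and spares the extra page-table walks on the sharing paths that the
commit message calls out.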
From patchwork Mon Dec 16 17:57:50 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:50 +0000
Subject: [PATCH v3 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
Message-ID: <20241216175803.2716565-6-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

kvm_pgtable_stage2_mkyoung currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..38b7ec1c8614 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,13 +669,15 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
  */
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..0470aedb4bf4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,14 +1245,13 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 				 NULL, NULL, 0);
 }
 
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       NULL, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       NULL, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..a2339b76c826 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1718,13 +1718,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
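The shape of the resulting API is easiest to see from the call sites: the
host keeps its shared, fault-handling walk, while an EL2 caller can choose
different semantics. A trivial sketch of that pattern (the flag values and
the pKVM caller are illustrative; only the host call site is in this patch):

#include <stdio.h>

/* Illustrative stand-ins for the kernel's walk flags */
enum walk_flags {
	WALK_SHARED		= 1 << 0,
	WALK_HANDLE_FAULT	= 1 << 1,
};

/* The walk semantics are now the caller's decision, not hard-coded */
static void stage2_mkyoung(unsigned long addr, enum walk_flags flags)
{
	printf("mkyoung %#lx: %s walk\n", addr,
	       (flags & WALK_SHARED) ? "shared" : "exclusive");
}

int main(void)
{
	/* Host caller, as in handle_access_fault() */
	stage2_mkyoung(0x1000, WALK_HANDLE_FAULT | WALK_SHARED);

	/* A pKVM EL2 caller serialised by its own lock could pass 0 */
	stage2_mkyoung(0x2000, 0);
	return 0;
}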
From patchwork Mon Dec 16 17:57:51 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:51 +0000
Subject: [PATCH v3 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
Message-ID: <20241216175803.2716565-7-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

kvm_pgtable_stage2_relax_perms currently assumes that it is being
called from a 'shared' walker, which will not be true once called from
pKVM. To allow for the re-use of that function, make the walk flags one
of its parameters.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 38b7ec1c8614..c2f4149283ef 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -707,6 +707,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -719,7 +720,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0470aedb4bf4..b7a3b5363235 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1307,7 +1307,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1325,9 +1325,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a2339b76c826..641e4fec1659 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1452,6 +1452,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1695,13 +1696,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
 	}
 
 out_unlock:
From patchwork Mon Dec 16 17:57:52 2024
From: Quentin Perret <qperret@google.com>
Date: Mon, 16 Dec 2024 17:57:52 +0000
Subject: [PATCH v3 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
Message-ID: <20241216175803.2716565-8-qperret@google.com>
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>

Turn kvm_pgtable_stage2_init() into a static inline function instead of
a macro. This will allow the usage of typeof() on it later on.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c2f4149283ef..04418b5e3004 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			      enum kvm_pgtable_stage2_flags flags,
 			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
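The typeof() motivation is easy to demonstrate: typeof() yields a usable type
for a real function, which makes it possible to declare matching function
pointers generically, whereas a function-like macro has no type at all. A
small sketch (the names are illustrative; builds with GCC/Clang, whose
typeof extension the kernel relies on):

#include <stdio.h>

/* Stand-in for the new static inline kvm_pgtable_stage2_init() */
static inline int stage2_init(void *pgt)
{
	return pgt ? 0 : -1;
}

int main(void)
{
	/* typeof() on a function yields its function type; against the old
	 * macro definition this declaration would not compile. */
	typeof(stage2_init) *init_fn = stage2_init;

	printf("ret = %d\n", init_fn(NULL));
	return 0;
}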
From patchwork Mon Dec 16 17:57:53 2024
Date: Mon, 16 Dec 2024 17:57:53 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-9-qperret@google.com>
Subject: [PATCH v3 08/18] KVM: arm64: Add {get,put}_pkvm_hyp_vm() helpers
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for accessing pkvm_hyp_vm structures at EL2 in a context
where we can't always expect a vCPU to be loaded (e.g. MMU notifiers),
introduce get/put helpers to get temporary references to hyp VMs from
any context.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..f361d8b91930 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -70,4 +70,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 071993c16de8..d46a02e24e4a 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -327,6 +327,26 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
+{
+	struct pkvm_hyp_vm *hyp_vm;
+
+	hyp_spin_lock(&vm_table_lock);
+	hyp_vm = get_vm_by_handle(handle);
+	if (hyp_vm)
+		hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+
+	return hyp_vm;
+}
+
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
+{
+	hyp_spin_lock(&vm_table_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+}
+
 static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm)
 {
 	struct kvm *kvm = &hyp_vm->kvm;
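[Editorial note: the helpers above follow the classic refcounted-lookup idiom: find the object under the table lock and pin it before the lock is dropped, so teardown cannot race with the caller. A stand-alone user-space sketch of the same shape, with a pthread mutex standing in for the hyp spinlock and all names hypothetical:

#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct vm {
	int handle;
	int refcount;		/* protected by table_lock */
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct vm vm_table[4] = { { .handle = 1 }, { .handle = 2 } };

static struct vm *find_vm(int handle)
{
	for (size_t i = 0; i < 4; i++) {
		if (vm_table[i].handle == handle)
			return &vm_table[i];
	}
	return NULL;
}

/* Take a temporary reference from any context; no vCPU required. */
static struct vm *get_vm(int handle)
{
	struct vm *vm;

	pthread_mutex_lock(&table_lock);
	vm = find_vm(handle);
	if (vm)
		vm->refcount++;		/* pin before dropping the lock */
	pthread_mutex_unlock(&table_lock);

	return vm;
}

static void put_vm(struct vm *vm)
{
	pthread_mutex_lock(&table_lock);
	vm->refcount--;
	pthread_mutex_unlock(&table_lock);
}

int main(void)
{
	struct vm *vm = get_vm(1);

	assert(vm && vm->refcount == 1);	/* safe to use until put_vm() */
	put_vm(vm);
	assert(vm->refcount == 0);
	return 0;
}

The real code reuses the hyp vmemmap page refcount of the VM's backing page rather than a dedicated field, but the locking shape is the same.]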
From patchwork Mon Dec 16 17:57:54 2024
Date: Mon, 16 Dec 2024 17:57:54 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-10-qperret@google.com>
Subject: [PATCH v3 09/18] KVM: arm64: Introduce __pkvm_vcpu_{load,put}()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Marc Zyngier

Rather than looking up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is updated
by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.
Signed-off-by: Marc Zyngier
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h       |  2 ++
 arch/arm64/kvm/arm.c                   | 14 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  7 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 47 ++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 29 ++++++++++++++++
 arch/arm64/kvm/vgic/vgic-v3.c          |  6 ++--
 6 files changed, 93 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ca2590344313..89c0fac69551 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,8 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..55cc62b2f469 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -619,12 +619,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
 
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+				  vcpu->kvm->arch.pkvm.handle,
+				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+	}
+
 	if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
 		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
+	}
+
 	kvm_arch_vcpu_put_debug_state_flags(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	if (has_vhe())
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index f361d8b91930..be52c5b15e21 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu {
 
 	/* Backpointer to the host's (untrusted) vCPU instance. */
 	struct kvm_vcpu *host_vcpu;
+
+	/*
+	 * If this hyp vCPU is loaded, then this is a backpointer to the
+	 * per-cpu pointer tracking us. Otherwise, NULL if not loaded.
+	 */
+	struct pkvm_hyp_vcpu **loaded_hyp_vcpu;
 };
 
 /*
@@ -69,6 +75,7 @@ int __pkvm_teardown_vm(pkvm_handle_t handle);
 struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6aa0b13d86e5..95d78db315b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -141,16 +141,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 		host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
 }
 
+static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2);
+	DECLARE_REG(u64, hcr_el2, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
+	if (!hyp_vcpu)
+		return;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) {
+		/* Propagate WFx trapping flags */
+		hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
+		hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+	}
+}
+
+static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (hyp_vcpu)
+		pkvm_put_hyp_vcpu(hyp_vcpu);
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
 	int ret;
 
-	host_vcpu = kern_hyp_va(host_vcpu);
-
 	if (unlikely(is_protected_kvm_enabled())) {
-		struct pkvm_hyp_vcpu *hyp_vcpu;
-		struct kvm *host_kvm;
+		struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
 
 		/*
 		 * KVM (and pKVM) doesn't support SME guests for now, and
@@ -163,9 +193,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 			goto out;
 		}
 
-		host_kvm = kern_hyp_va(host_vcpu->kvm);
-		hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
-					      host_vcpu->vcpu_idx);
 		if (!hyp_vcpu) {
 			ret = -EINVAL;
 			goto out;
@@ -176,12 +203,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 
 		ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
 
 		sync_hyp_vcpu(hyp_vcpu);
-		pkvm_put_hyp_vcpu(hyp_vcpu);
 	} else {
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(host_vcpu);
+		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
 	}
-
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
@@ -409,6 +434,8 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init_vm),
 	HANDLE_FUNC(__pkvm_init_vcpu),
 	HANDLE_FUNC(__pkvm_teardown_vm),
+	HANDLE_FUNC(__pkvm_vcpu_load),
+	HANDLE_FUNC(__pkvm_vcpu_put),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d46a02e24e4a..496d186efb03 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,6 +23,12 @@
 unsigned int kvm_arm_vmid_bits;
 unsigned int kvm_host_sve_max_vl;
 
+/*
+ * The currently loaded hyp vCPU for each physical CPU. Used only when
+ * protected KVM is enabled, but for both protected and non-protected VMs.
+ */
+static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
@@ -306,15 +312,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
 	struct pkvm_hyp_vm *hyp_vm;
 
+	/* Cannot load a new vcpu without putting the old one first. */
+	if (__this_cpu_read(loaded_hyp_vcpu))
+		return NULL;
+
 	hyp_spin_lock(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
 		goto unlock;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+
+	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
+		hyp_vcpu = NULL;
+		goto unlock;
+	}
+
+	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
+
+	if (hyp_vcpu)
+		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
+
 	return hyp_vcpu;
 }
 
@@ -323,10 +344,18 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
 	hyp_spin_lock(&vm_table_lock);
+	hyp_vcpu->loaded_hyp_vcpu = NULL;
+	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
+{
+	return __this_cpu_read(loaded_hyp_vcpu);
+}
+
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
 {
 	struct pkvm_hyp_vm *hyp_vm;
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index f267bc2486a1..c2ef41fff079 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
 
 	WARN_ON(vgic_v4_put(vcpu));
 
 	if (has_vhe())
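[Editorial note: the load/put pair amortizes the table lookup: the vCPU is looked up and pinned once at load time, stashed in a per-CPU slot, and every subsequent run hypercall just reads that slot. The backpointer also enforces that a vCPU is never loaded on two CPUs at once. A minimal single-threaded sketch of that invariant, with _Thread_local standing in for DEFINE_PER_CPU() and the table locking elided; names are illustrative:

#include <assert.h>
#include <stddef.h>

struct vcpu {
	/* Backpointer to the per-thread slot tracking us; NULL if not loaded. */
	struct vcpu **loaded_slot;
};

/* One slot per "CPU"; _Thread_local stands in for a per-CPU variable. */
static _Thread_local struct vcpu *loaded_vcpu;

static int vcpu_load(struct vcpu *vcpu)
{
	if (loaded_vcpu)
		return -1;		/* must put the old vCPU first */
	if (vcpu->loaded_slot)
		return -1;		/* already loaded on another CPU */

	vcpu->loaded_slot = &loaded_vcpu;
	loaded_vcpu = vcpu;
	return 0;
}

static void vcpu_put(void)
{
	struct vcpu *vcpu = loaded_vcpu;

	if (vcpu) {
		vcpu->loaded_slot = NULL;
		loaded_vcpu = NULL;
	}
}

int main(void)
{
	struct vcpu v = { 0 };

	assert(vcpu_load(&v) == 0);
	assert(vcpu_load(&v) == -1);	/* double-load is rejected */
	vcpu_put();
	assert(vcpu_load(&v) == 0);	/* loadable again after put */
	return 0;
}

In the real patch, both exclusivity checks are done under vm_table_lock; the sketch keeps only the pointer discipline.]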
From patchwork Mon Dec 16 17:57:55 2024
Date: Mon, 16 Dec 2024 17:57:55 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-11-qperret@google.com>
Subject: [PATCH v3 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for handling guest stage-2 mappings at EL2, introduce a
new pKVM hypercall allowing the host to share pages with non-protected
guests.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/include/asm/kvm_host.h             |  3 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h      |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 34 +++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 72 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  7 ++
 7 files changed, 120 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 89c0fac69551..449337f5b2a3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -65,6 +65,7 @@ enum __kvm_host_smccc_func {
 	/* Hypercalls available after pKVM finalisation */
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..1246f1d01dbf 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,9 @@ struct kvm_vcpu_arch {
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
+	/* Pages to top-up the pKVM/EL2 guest pool */
+	struct kvm_hyp_memcache pkvm_memcache;
+
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 25038ac705d8..a7976e50f556 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,6 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 8bd9a539f260..cc431820c6ce 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -46,6 +46,8 @@ struct hyp_page {
 
 	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
 	enum pkvm_page_state host_state : 8;
+
+	u32 host_share_guest_count;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 95d78db315b3..d659462fbf5d 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -211,6 +211,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+
+	return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache,
+			       host_vcpu->arch.pkvm_memcache.nr_pages,
+			       &host_vcpu->arch.pkvm_memcache);
+}
+
+static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = pkvm_refill_memcache(hyp_vcpu);
+	if (ret)
+		goto out;
+
+	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -420,6 +453,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
+	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 12bb5445fe47..fb9592e721cf 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -867,6 +867,27 @@ static int hyp_complete_donation(u64 addr,
 	return pkvm_create_mappings_locked(start, end, prot);
 }
 
+static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
+{
+	if (!kvm_pte_valid(pte))
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+					  u64 size, enum pkvm_page_state state)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	struct check_walk_data d = {
+		.desired	= state,
+		.get_page_state	= guest_get_page_state,
+	};
+
+	hyp_assert_lock_held(&vm->lock);
+	return check_page_state_range(&vm->pgt, addr, size, &d);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1349,3 +1370,54 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+			    enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 phys = hyp_pfn_to_phys(pfn);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	int ret;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	switch (page->host_state) {
+	case PKVM_PAGE_OWNED:
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
+		break;
+	case PKVM_PAGE_SHARED_OWNED:
+		if (page->host_share_guest_count)
+			break;
+		/* Only host to np-guest multi-sharing is tolerated */
+		WARN_ON(1);
+		fallthrough;
+	default:
+		ret = -EPERM;
+		goto unlock;
+	}
+
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
+				       &vcpu->vcpu.arch.pkvm_memcache, 0));
+	page->host_share_guest_count++;
+
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 496d186efb03..f2e363fe6b84 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -795,6 +795,13 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	/* Push the metadata pages to the teardown memcache */
 	for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
 		struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
+		struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
+
+		while (vcpu_mc->nr_pages) {
+			void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
+
+			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+		}
 
 		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}
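[Editorial note: the state logic in __pkvm_host_share_guest() is a small transition system: a page the host fully owns moves to SHARED_OWNED on the first share, and further shares of an already-shared page are only tolerated when they are host-to-guest shares accounted by host_share_guest_count. A reduced, stand-alone sketch of just that accounting, with hypothetical types and no page tables:

#include <assert.h>

enum page_state { PAGE_OWNED, PAGE_SHARED_OWNED };

struct page {
	enum page_state host_state;
	unsigned int host_share_guest_count;
};

/* Returns 0 on success, -1 (EPERM-like) if the transition is not allowed. */
static int share_with_guest(struct page *p)
{
	switch (p->host_state) {
	case PAGE_OWNED:
		/* First share: the host keeps ownership, page becomes shared. */
		p->host_state = PAGE_SHARED_OWNED;
		break;
	case PAGE_SHARED_OWNED:
		/* Only multi-sharing with np-guests is tolerated. */
		if (!p->host_share_guest_count)
			return -1;	/* shared with hyp/FF-A, not a guest */
		break;
	}

	p->host_share_guest_count++;
	return 0;
}

int main(void)
{
	struct page p = { PAGE_OWNED, 0 };

	assert(share_with_guest(&p) == 0);	/* OWNED -> SHARED_OWNED */
	assert(share_with_guest(&p) == 0);	/* guest multi-share is fine */
	assert(p.host_share_guest_count == 2);
	return 0;
}

The real code additionally checks that the guest side of the page is NOPAGE, maps the page as SHARED_BORROWED in the guest stage-2, and does all of it under both the host and guest locks.]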
From patchwork Mon Dec 16 17:57:56 2024
Date: Mon, 16 Dec 2024 17:57:56 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-12-qperret@google.com>
Subject: [PATCH v3 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for letting the host unmap pages from non-protected
guests, introduce a new hypercall implementing the host-unshare-guest
transition.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |  6 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 21 ++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 67 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 12 ++++
 6 files changed, 108 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 449337f5b2a3..0b6c4d325134 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -66,6 +66,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a7976e50f556..e528a42ed60e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -40,6 +40,7 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index be52c5b15e21..0cc2a429f1fb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
 	return vcpu_is_protected(&hyp_vcpu->vcpu);
 }
 
+static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm)
+{
+	return kvm_vm_is_protected(&hyp_vm->kvm);
+}
+
 void pkvm_hyp_vm_table_init(void *tbl);
 
 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
@@ -78,6 +83,7 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
+struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle);
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
 
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d659462fbf5d..3c3a27c985a2 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -244,6 +244,26 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_np_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+
+	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -454,6 +474,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
+	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index fb9592e721cf..30243b7922f1 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1421,3 +1421,70 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 
 	return ret;
 }
+
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+{
+	enum pkvm_page_state state;
+	struct hyp_page *page;
+	kvm_pte_t pte;
+	u64 phys;
+	s8 level;
+	int ret;
+
+	ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
+	if (ret)
+		return ret;
+	if (level != KVM_PGTABLE_LAST_LEVEL)
+		return -E2BIG;
+	if (!kvm_pte_valid(pte))
+		return -ENOENT;
+
+	state = guest_get_page_state(pte, ipa);
+	if (state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	phys = kvm_pte_to_phys(pte);
+	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	if (WARN_ON(ret))
+		return ret;
+
+	page = hyp_phys_to_page(phys);
+	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+	if (WARN_ON(!page->host_share_guest_count))
+		return -EINVAL;
+
+	*__phys = phys;
+
+	return 0;
+}
+
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_shared_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	page->host_share_guest_count--;
+	if (!page->host_share_guest_count)
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index f2e363fe6b84..1b0982fa5ba8 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -376,6 +376,18 @@ void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle)
+{
+	struct pkvm_hyp_vm *hyp_vm = get_pkvm_hyp_vm(handle);
+
+	if (hyp_vm && pkvm_hyp_vm_is_protected(hyp_vm)) {
+		put_pkvm_hyp_vm(hyp_vm);
+		hyp_vm = NULL;
+	}
+
+	return hyp_vm;
+}
+
 static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm)
 {
 	struct kvm *kvm = &hyp_vm->kvm;
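[Editorial note: unshare is the inverse transition of the previous patch: validate that the page really is a host-to-guest share, drop the guest mapping, decrement the share count, and only when the last guest share is gone does the page return to host-exclusive ownership. A stand-alone sketch of the accounting, repeating the minimal types from the earlier sketch; all names hypothetical:

#include <assert.h>

enum page_state { PAGE_OWNED, PAGE_SHARED_OWNED };

struct page {
	enum page_state host_state;
	unsigned int host_share_guest_count;
};

/* Inverse of the share transition; returns -1 on an invalid unshare. */
static int unshare_from_guest(struct page *p)
{
	if (p->host_state != PAGE_SHARED_OWNED || !p->host_share_guest_count)
		return -1;	/* not a host-to-guest share */

	p->host_share_guest_count--;
	if (!p->host_share_guest_count)
		p->host_state = PAGE_OWNED;	/* last guest share is gone */

	return 0;
}

int main(void)
{
	struct page p = { PAGE_SHARED_OWNED, 2 };

	assert(unshare_from_guest(&p) == 0);
	assert(p.host_state == PAGE_SHARED_OWNED);	/* one share left */
	assert(unshare_from_guest(&p) == 0);
	assert(p.host_state == PAGE_OWNED);		/* back to exclusive */
	assert(unshare_from_guest(&p) == -1);		/* nothing to unshare */
	return 0;
}

The real __check_host_shared_guest() also verifies the guest PTE is a last-level SHARED_BORROWED mapping before any state is touched, so a bogus gfn fails cleanly.]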
From patchwork Mon Dec 16 17:57:57 2024
Date: Mon, 16 Dec 2024 17:57:57 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-13-qperret@google.com>
Subject: [PATCH v3 12/18] KVM: arm64: Introduce __pkvm_host_relax_perms_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

Introduce a new hypercall allowing the host to relax the stage-2
permissions of mappings in a non-protected guest page-table. It will be
used later once we start allowing RO memslots and dirty logging.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 20 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 23 +++++++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 0b6c4d325134..66ee8542dcc9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -67,6 +67,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index e528a42ed60e..a308dcd3b5b8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,6 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 3c3a27c985a2..287e4ee93ef2 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -264,6 +264,25 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_relax_perms_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_relax_perms_guest(gfn, hyp_vcpu, prot);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -475,6 +494,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
+	HANDLE_FUNC(__pkvm_host_relax_perms_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 30243b7922f1..aa8e0408aebb 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1488,3 +1488,26 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
 
 	return ret;
 }
+
+int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_shared_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
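[Editorial note: permission relaxation only ever adds bits within RWX, and the handler rejects anything outside that mask up front. The core check is simple bit arithmetic; a stand-alone sketch with illustrative bit values (the real prot encoding lives in kvm_pgtable.h, and kvm_pgtable_stage2_relax_perms() additionally handles the PTE update and TLB maintenance):

#include <assert.h>

#define PROT_R		(1u << 0)
#define PROT_W		(1u << 1)
#define PROT_X		(1u << 2)
#define PROT_RWX	(PROT_R | PROT_W | PROT_X)

/* Relax (never tighten) the permissions of an existing mapping. */
static int relax_perms(unsigned int *cur, unsigned int requested)
{
	if (requested & ~PROT_RWX)
		return -1;	/* EINVAL-like: only R/W/X may be requested */

	*cur |= requested;	/* relaxing == OR-ing in new bits */
	return 0;
}

int main(void)
{
	unsigned int prot = PROT_R;

	assert(relax_perms(&prot, PROT_W) == 0);
	assert(prot == (PROT_R | PROT_W));		/* W added, R kept */
	assert(relax_perms(&prot, 1u << 5) == -1);	/* bogus bit rejected */
	return 0;
}
]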
From patchwork Mon Dec 16 17:57:58 2024
Date: Mon, 16 Dec 2024 17:57:58 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-14-qperret@google.com>
Subject: [PATCH v3 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

Introduce a new hypercall to remove the write permission from a
non-protected guest stage-2 mapping. This will be used for e.g. enabling
dirty logging.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 21 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 19 +++++++++++++++++
 4 files changed, 42 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 66ee8542dcc9..8663a588cf34 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a308dcd3b5b8..fc9fdd5b0a52 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 287e4ee93ef2..98d317735107 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -283,6 +283,26 @@ static void handle___pkvm_host_relax_perms_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }

+static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_np_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+
+	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -495,6 +515,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_perms_guest),
+	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index aa8e0408aebb..94e4251b5077 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1511,3 +1511,22 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot)

 	return ret;
 }
+
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_shared_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
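For readers following along, a minimal sketch of how the host is
expected to invoke this hypercall for a single gfn -- illustrative
only; the example_* name is hypothetical, and only kvm_call_hyp_nvhe()
and the hypercall ID come from the series:

static int example_wrprotect_one(pkvm_handle_t handle, u64 gfn)
{
	/*
	 * Arguments travel in GPRs (x1 = handle, x2 = gfn, matching the
	 * DECLARE_REG() calls in the handler above); the handler writes
	 * its return code back into x1.
	 */
	return kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn);
}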
From patchwork Mon Dec 16 17:57:59 2024

Subject: [PATCH v3 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest()
From: Quentin Perret
Date: Mon, 16 Dec 2024 17:57:59 +0000
Message-ID: <20241216175803.2716565-15-qperret@google.com>

Plumb the kvm_stage2_test_clear_young() callback into pKVM for
non-protected guests. It will later be called from MMU notifiers.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 22 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 19 ++++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 8663a588cf34..4f97155d6323 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index fc9fdd5b0a52..b3aaad150b3e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 98d317735107..616e172a9c48 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -303,6 +303,27 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }

+static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_np_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+
+	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -516,6 +537,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_perms_guest),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
+	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 94e4251b5077..0e42c3baaf4b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1530,3 +1530,22 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)

 	return ret;
 }
+
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_shared_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
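For illustration, an MMU-notifier-style aging check for one page could
be built on this hypercall as below -- a sketch only, with a
hypothetical example_age_gfn() name:

static bool example_age_gfn(pkvm_handle_t handle, u64 gfn, bool mkold)
{
	/*
	 * Non-zero means the page was accessed since the last clear;
	 * with mkold set, the access flag is cleared again so a future
	 * access marks the page young once more.
	 */
	return kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest,
				 handle, gfn, mkold);
}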
From patchwork Mon Dec 16 17:58:00 2024

Subject: [PATCH v3 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest()
From: Quentin Perret
Date: Mon, 16 Dec 2024 17:58:00 +0000
Message-ID: <20241216175803.2716565-16-qperret@google.com>

Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for
non-protected guests. It will be called later from the fault handling
path.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 19 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 20 +++++++++++++++++++
 4 files changed, 41 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4f97155d6323..a3b07db2776c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index b3aaad150b3e..65c34753d86c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -44,6 +44,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 616e172a9c48..32c4627b5b5b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -324,6 +324,24 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }

+static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_mkyoung_guest(gfn, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -538,6 +556,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_relax_perms_guest),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
+	HANDLE_FUNC(__pkvm_host_mkyoung_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0e42c3baaf4b..eae03509d371 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1549,3 +1549,23 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)

 	return ret;
 }
+
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_shared_guest(vm, &phys, ipa);
+	if (!ret)
+		kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
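A sketch of the intended use from the fault path -- illustrative only,
the example_* name is hypothetical; note the hypercall takes no VM
handle because it operates on the currently loaded vCPU:

static void example_handle_access_fault(phys_addr_t fault_ipa)
{
	u64 gfn = fault_ipa >> PAGE_SHIFT;

	/* Sets the access flag in the loaded guest's stage-2 PTE at EL2. */
	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, gfn));
}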
From patchwork Mon Dec 16 17:58:01 2024

Subject: [PATCH v3 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
From: Quentin Perret
Date: Mon, 16 Dec 2024 17:58:01 +0000
Message-ID: <20241216175803.2716565-17-qperret@google.com>

Introduce a new hypercall to flush the TLBs of non-protected guests. The
host kernel will be responsible for issuing this hypercall after
changing stage-2 permissions via the __pkvm_host_relax_perms_guest() or
__pkvm_host_wrprotect_guest() paths. This is left under the host's
responsibility for performance reasons.

Note however that the TLB maintenance for all *unmap* operations still
remains entirely under the hypervisor's responsibility for security
reasons -- an unmapped page may be donated to another entity, so a stale
TLB entry could be used to leak private data.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a3b07db2776c..002088c6e297 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -87,6 +87,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
+	__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
 };

 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 32c4627b5b5b..130f5f23bcb5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -389,6 +389,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }

+static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	struct pkvm_hyp_vm *hyp_vm;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vm = get_np_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		return;
+
+	__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+	put_pkvm_hyp_vm(hyp_vm);
+}
+
 static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -573,6 +589,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_teardown_vm),
 	HANDLE_FUNC(__pkvm_vcpu_load),
 	HANDLE_FUNC(__pkvm_vcpu_put),
+	HANDLE_FUNC(__pkvm_tlb_flush_vmid),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
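To make the division of labour concrete, a host-side write-protection
pass might look like the sketch below -- illustrative only, the
example_* name is hypothetical. One VMID-wide flush covers a whole
batch of permission changes, which is the performance win mentioned
above:

static int example_wrprotect_and_flush(pkvm_handle_t handle, u64 gfn, u64 nr)
{
	int ret = 0;
	u64 i;

	for (i = 0; i < nr && !ret; i++)
		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn + i);

	/*
	 * One flush for the whole batch. Unmaps, by contrast, are
	 * flushed by the hypervisor itself and need no call here.
	 */
	kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, handle);

	return ret;
}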
From patchwork Mon Dec 16 17:58:02 2024

Subject: [PATCH v3 17/18] KVM: arm64: Introduce the EL1 pKVM MMU
From: Quentin Perret
Date: Mon, 16 Dec 2024 17:58:02 +0000
Message-ID: <20241216175803.2716565-18-qperret@google.com>

Introduce a set of helper functions for manipulating the pKVM guest
stage-2 page-tables from EL1 using pKVM's HVC interface.
Each helper has an exact one-to-one correspondence with the traditional
kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly
matching prototype. This will ease plumbing later on in mmu.c.

These callbacks track the gfn->pfn mappings in a simple rb_tree indexed
by IPA in lieu of a page-table. This rb-tree is kept in sync with pKVM's
state and is protected by a new rwlock -- the existing mmu_lock
protection does not suffice in the map() path where the tree must be
modified while user_mem_abort() only acquires a read_lock.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h    |   1 +
 arch/arm64/include/asm/kvm_pgtable.h |  23 ++--
 arch/arm64/include/asm/kvm_pkvm.h    |  23 ++++
 arch/arm64/kvm/pkvm.c                | 198 +++++++++++++++++++++++++++
 4 files changed, 236 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1246f1d01dbf..f23f4ea9ec8b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -85,6 +85,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 struct kvm_hyp_memcache {
 	phys_addr_t head;
 	unsigned long nr_pages;
+	struct pkvm_mapping *mapping; /* only used from EL1 */
 };

 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 04418b5e3004..6b9d274052c7 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -412,15 +412,20 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  * be used instead of block mappings.
  */
 struct kvm_pgtable {
-	u32					ia_bits;
-	s8					start_level;
-	kvm_pteref_t				pgd;
-	struct kvm_pgtable_mm_ops		*mm_ops;
-
-	/* Stage-2 only */
-	struct kvm_s2_mmu			*mmu;
-	enum kvm_pgtable_stage2_flags		flags;
-	kvm_pgtable_force_pte_cb_t		force_pte_cb;
+	union {
+		struct rb_root					pkvm_mappings;
+		struct {
+			u32					ia_bits;
+			s8					start_level;
+			kvm_pteref_t				pgd;
+			struct kvm_pgtable_mm_ops		*mm_ops;
+
+			/* Stage-2 only */
+			enum kvm_pgtable_stage2_flags		flags;
+			kvm_pgtable_force_pte_cb_t		force_pte_cb;
+		};
+	};
+	struct kvm_s2_mmu					*mmu;
 };

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..76a8b70176a6 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -137,4 +137,27 @@ static inline size_t pkvm_host_sve_state_size(void)
 		    SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
 }

+struct pkvm_mapping {
+	struct rb_node node;
+	u64 gfn;
+	u64 pfn;
+};
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops);
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt);
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold);
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags);
+void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc);
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level);
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte);
+
 #endif /* __ARM64_KVM_PKVM_H__ */

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 85117ea8f351..9de9159afa5a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,3 +269,200 @@ static int __init finalize_pkvm(void)
 	return ret;
 }
 device_initcall_sync(finalize_pkvm);
+
+static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+{
+	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
+	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
+
+	if (a->gfn < b->gfn)
+		return -1;
+	if (a->gfn > b->gfn)
+		return 1;
+	return 0;
+}
+
+static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+{
+	struct rb_node *node = root->rb_node, *prev = NULL;
+	struct pkvm_mapping *mapping;
+
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		if (mapping->gfn == gfn)
+			return node;
+		prev = node;
+		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
+	}
+
+	return prev;
+}
+
+/*
+ * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
+ * of __map inline.
+ */
+#define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)			\
+	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,	\
+							     ((__start) >> PAGE_SHIFT));\
+	     __tmp && ({								\
+			__map = rb_entry(__tmp, struct pkvm_mapping, node);		\
+			__tmp = rb_next(__tmp);						\
+			true;								\
+		       });								\
+	    )										\
+		if (__map->gfn < ((__start) >> PAGE_SHIFT))				\
+			continue;							\
+		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))				\
+			break;								\
+		else
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops)
+{
+	pgt->pkvm_mappings	= RB_ROOT;
+	pgt->mmu		= mmu;
+
+	return 0;
+}
+
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	pkvm_handle_t handle = kvm->arch.pkvm.handle;
+	struct pkvm_mapping *mapping;
+	struct rb_node *node;
+
+	if (!handle)
+		return;
+
+	node = rb_first(&pgt->pkvm_mappings);
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		node = rb_next(node);
+		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+		kfree(mapping);
+	}
+}
+
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	struct pkvm_mapping *mapping = NULL;
+	struct kvm_hyp_memcache *cache = mc;
+	u64 gfn = addr >> PAGE_SHIFT;
+	u64 pfn = phys >> PAGE_SHIFT;
+	int ret;
+
+	if (size != PAGE_SIZE)
+		return -EINVAL;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	if (ret) {
+		/* Is the gfn already mapped due to a racing vCPU? */
+		if (ret == -EPERM)
+			return -EAGAIN;
+	}
+
+	swap(mapping, cache->mapping);
+	mapping->gfn = gfn;
+	mapping->pfn = pfn;
+	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+
+	return ret;
+}
+
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	pkvm_handle_t handle = kvm->arch.pkvm.handle;
+	struct pkvm_mapping *mapping;
+	int ret = 0;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+		kfree(mapping);
+	}
+
+	return ret;
+}
+
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	pkvm_handle_t handle = kvm->arch.pkvm.handle;
+	struct pkvm_mapping *mapping;
+	int ret = 0;
+
+	lockdep_assert_held(&kvm->mmu_lock);
+	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+	}
+
+	return ret;
+}
+
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	struct pkvm_mapping *mapping;
+
+	lockdep_assert_held(&kvm->mmu_lock);
+	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+
+	return 0;
+}
+
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	pkvm_handle_t handle = kvm->arch.pkvm.handle;
+	struct pkvm_mapping *mapping;
+	bool young = false;
+
+	lockdep_assert_held(&kvm->mmu_lock);
+	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
+		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
+					   mkold);
+
+	return young;
+}
+
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_relax_perms_guest, addr >> PAGE_SHIFT, prot);
+}
+
+void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags)
+{
+	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT));
+}
+
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
+{
+	WARN_ON_ONCE(1);
+}
+
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte)
+{
+	WARN_ON_ONCE(1);
+	return NULL;
+}
+
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
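To see what the rb-tree bookkeeping above buys us, here is a sketch of
a point lookup -- not part of the patch, names are hypothetical; it
mirrors find_first_mapping_node() but only returns exact hits:

static struct pkvm_mapping *example_lookup(struct kvm_pgtable *pgt, u64 ipa)
{
	u64 gfn = ipa >> PAGE_SHIFT;
	struct rb_node *node = pgt->pkvm_mappings.rb_node;

	while (node) {
		struct pkvm_mapping *m = rb_entry(node, struct pkvm_mapping, node);

		if (m->gfn == gfn)
			return m;	/* IPA is backed by PFN m->pfn */
		node = gfn < m->gfn ? node->rb_left : node->rb_right;
	}

	return NULL;	/* no stage-2 mapping for this IPA */
}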
From patchwork Mon Dec 16 17:58:03 2024
Subject: [PATCH v3 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret
Date: Mon, 16 Dec 2024 17:58:03 +0000
Message-ID: <20241216175803.2716565-19-qperret@google.com>

Introduce the KVM_PGT_S2() helper macro to allow switching from the
traditional pgtable code to the pKVM version easily in mmu.c. The cost
of this 'indirection' is expected to be minimal due to
is_protected_kvm_enabled() being backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_mmu.h   |  16 +++++
 arch/arm64/kvm/arm.c               |   9 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   2 -
 arch/arm64/kvm/mmu.c               | 107 +++++++++++++++++++++--------
 4 files changed, 101 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 66d93e320ec8..d116ab4230e8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -353,6 +353,22 @@ static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	return &kvm->arch.mmu != mmu;
 }

+static inline void kvm_fault_lock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_lock(&kvm->mmu_lock);
+	else
+		read_lock(&kvm->mmu_lock);
+}
+
+static inline void kvm_fault_unlock(struct kvm *kvm)
+{
+	if (is_protected_kvm_enabled())
+		write_unlock(&kvm->mmu_lock);
+	else
+		read_unlock(&kvm->mmu_lock);
+}
+
 #ifdef CONFIG_PTDUMP_STAGE2_DEBUGFS
 void kvm_s2_ptdump_create_debugfs(struct kvm *kvm);
 #else

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 55cc62b2f469..9bcbc7b8ed38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)

 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;

+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);

@@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}

+nommu:
 	vcpu->cpu = cpu;

 	kvm_vgic_load(vcpu);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 130f5f23bcb5..258d572eed62 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);

-	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
 	hyp_vcpu->vcpu.arch.hcr_el2	&= ~(HCR_TWI | HCR_TWE);
 	hyp_vcpu->vcpu.arch.hcr_el2	|= READ_ONCE(host_vcpu->arch.hcr_el2) &

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 641e4fec1659..7c2995cb4577 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;

 static unsigned long __ro_after_init io_map_base;

+#define KVM_PGT_S2(fn, ...)							\
+	({									\
+		typeof(kvm_pgtable_stage2_ ## fn) *__fn = kvm_pgtable_stage2_ ## fn; \
+		if (is_protected_kvm_enabled())					\
+			__fn = pkvm_pgtable_ ## fn;				\
+		__fn(__VA_ARGS__);						\
+	})
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
 					   phys_addr_t size)
 {
@@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;

 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_S2(split, pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }

 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
 				     gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
@@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);

-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level);
 }

 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -280,6 +297,11 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 	__invalidate_icache_guest_page(va, size);
 }

+static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(unmap, pgt, addr, size);
+}
+
 /*
  * Unmapping vs dcache management:
  *
@@ -324,8 +346,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
-				   may_block));
+	WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block));
 }

 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -334,9 +355,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 	__unmap_stage2_range(mmu, start, size, may_block);
 }

+static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(flush, pgt, addr, size);
+}
+
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush);
 }

 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +968,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
 		return -ENOMEM;

 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;

+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +989,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;

-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);

 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +997,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
 	return 0;

 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_S2(destroy, pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1094,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);

 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_S2(destroy, pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1111,11 @@ static void *hyp_mc_alloc_fn(void *unused)

 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }

 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1123,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;

+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
 				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1167,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;

 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1143,6 +1179,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	return ret;
 }

+static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(wrprotect, pgt, addr, size);
+}
+
 /**
  * kvm_stage2_wp_range() - write protect stage2 memory region range
  * @mmu:	The KVM stage-2 MMU pointer
@@ -1151,7 +1191,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect);
 }

 /**
@@ -1442,9 +1482,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1472,8 +1512,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1494,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1634,7 +1681,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		prot |= kvm_encode_nested_level(nested);
 	}

-	read_lock(&kvm->mmu_lock);
+	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_invalidate_retry(kvm, mmu_seq)) {
 		ret = -EAGAIN;
@@ -1696,16 +1743,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize,
 				     __pfn_to_phys(pfn), prot,
 				     memcache, flags);
 	}

 out_unlock:
 	kvm_release_faultin_page(kvm, page, !!ret, writable);
-	read_unlock(&kvm->mmu_lock);
+	kvm_fault_unlock(kvm);

 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret)
@@ -1724,7 +1771,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)

 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }

@@ -1764,7 +1811,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}

 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);

 		if (is_iabt)
@@ -1930,7 +1977,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;

-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, true);

 	/*
@@ -1946,7 +1993,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;

-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, false);
 }
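As a closing illustration of the KVM_PGT_S2() dispatch above, here is
roughly what one call site expands to -- a sketch, with a hypothetical
example_* name; since is_protected_kvm_enabled() is backed by a static
key, the non-pKVM path costs a single patched branch:

static int example_wp(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
	/* Equivalent to KVM_PGT_S2(wrprotect, pgt, addr, size). */
	typeof(kvm_pgtable_stage2_wrprotect) *fn = kvm_pgtable_stage2_wrprotect;

	if (is_protected_kvm_enabled())
		fn = pkvm_pgtable_wrprotect;

	return fn(pgt, addr, size);
}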