From patchwork Tue Dec 3 10:37:18 2024
X-Patchwork-Id: 13892103
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:18 +0000
Subject: [PATCH v2 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
Message-ID: <20241203103735.2267589-2-qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

The 'concrete' (a.k.a. non-meta) page states are currently encoded using
software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states, since
that makes conversions to and from PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..ca3177481b78 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,28 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
 
 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }
 
 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }
 
 struct host_mmu {
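To make the re-encoding concrete, here is a stand-alone sketch of the new
scheme. It assumes SW0/SW1 sit at PTE bits 55 and 56 (the arm64 stage-2
software-bit convention), and the FIELD_PREP()/FIELD_GET() stand-ins are
hard-coded to that mask; the kernel macros derive the shift from the mask
at compile time.

#include <assert.h>
#include <stdint.h>

#define BIT(n)                    (1ULL << (n))
#define KVM_PGTABLE_PROT_SW0      BIT(55)
#define KVM_PGTABLE_PROT_SW1      BIT(56)
#define PKVM_PAGE_STATE_PROT_MASK (KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)

/* Simplified stand-ins, hard-coded for a mask whose low bit is bit 55 */
#define FIELD_PREP(mask, val)     (((uint64_t)(val) << 55) & (mask))
#define FIELD_GET(mask, reg)      (((uint64_t)(reg) & (mask)) >> 55)

enum pkvm_page_state {
	PKVM_PAGE_OWNED           = 0,
	PKVM_PAGE_SHARED_OWNED    = BIT(0),
	PKVM_PAGE_SHARED_BORROWED = BIT(1),
};

int main(void)
{
	/* Pack the abstract state (bits 0-1) into the PTE SW bits (55-56) */
	uint64_t prot = FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK,
				   PKVM_PAGE_SHARED_BORROWED);

	/* ...and unpack it again: the state round-trips losslessly */
	assert(FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot) ==
	       PKVM_PAGE_SHARED_BORROWED);
	return 0;
}

The old layout stored the state pre-shifted (equal to the SW bit values
themselves) so it could be ORed straight into a PTE; the new layout keeps
the state in bits 0-1, which is what later allows it to be stored
compactly outside the PTEs.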
From patchwork Tue Dec 3 10:37:19 2024
X-Patchwork-Id: 13892104
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:19 +0000
Subject: [PATCH v2 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
Message-ID: <20241203103735.2267589-3-qperret@google.com>

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 35 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 34 ++++++++++++++++++
 2 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ca3177481b78..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,43 +11,10 @@
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_pgtable.h>
 #include <asm/virt.h>
+#include <nvhe/memory.h>
 #include <nvhe/pkvm.h>
 #include <nvhe/spinlock.h>
 
-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..6dfeb000371c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,40 @@
 
 #include <linux/types.h>
 
+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;
From patchwork Tue Dec 3 10:37:20 2024
X-Patchwork-Id: 13892105
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:20 +0000
Subject: [PATCH v2 03/18] KVM: arm64: Make hyp_page::order a u8
Message-ID: <20241203103735.2267589-4-qperret@google.com>

We don't need 16 bits to store the hyp page order, and we'll soon need
some bits to store page ownership data, so shrink the order member to
a u8.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..f1725bad6331 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 6dfeb000371c..88cb8ff9e769 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -42,8 +42,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }
 
 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;
 
@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 
 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;
 
 	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;
 
 	hyp_spin_lock(&pool->lock);
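A quick stand-alone illustration of the layout change (field names match
the diff; the assertions reflect the standard C layout of these types,
which the patch relies on):

#include <stdint.h>

struct hyp_page_old { uint16_t refcount; uint16_t order; };
struct hyp_page_new { uint16_t refcount; uint8_t order; uint8_t reserved; };

/* Same 32-bit footprint, but one byte is now free for ownership state */
_Static_assert(sizeof(struct hyp_page_old) == sizeof(struct hyp_page_new),
	       "shrinking 'order' must not change the struct size");
_Static_assert(sizeof(struct hyp_page_new) == 4, "fits one 32-bit word");

The HYP_NO_ORDER sentinel shrinks accordingly, from USHRT_MAX to 0xff,
the largest value a u8 can hold; real page orders stay far below it.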
From patchwork Tue Dec 3 10:37:21 2024
X-Patchwork-Id: 13892106
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:21 +0000
Subject: [PATCH v2 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
Message-ID: <20241203103735.2267589-5-qperret@google.com>
We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as it forces us to break block mappings purely to support
this software tracking. The result is an unnecessarily fragmented
stage-2 page-table for the host, in particular when it shares pages with
Secure, which can lead to measurable regressions. Moreover, having this
state stored in the page-table forces us to do multiple costly walks on
the page transition path, adding yet more overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  6 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 94 ++++++++++++++++--------
 arch/arm64/kvm/hyp/nvhe/setup.c          |  7 +-
 3 files changed, 71 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 88cb8ff9e769..08f3a0416d4c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include <linux/types.h>
 
 /*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ * Bits 0-1 are reserved to track the memory ownership state of each page:
  *   00: The page is owned exclusively by the page-table owner.
  *   01: The page is owned by the page-table owner, but is shared
  *       with another entity.
@@ -44,7 +44,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 struct hyp_page {
 	u16 refcount;
 	u8 order;
-	u8 reserved;
+
+	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
+	enum pkvm_page_state host_state : 8;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..1595081c4f6b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc)
 	memset(addr, 0, PAGE_SIZE);
 	p = hyp_virt_to_page(addr);
-	memset(p, 0, sizeof(*p));
 	p->refcount = 1;
+	p->order = 0;
 
 	return addr;
 }
@@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
 
 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 {
+	struct hyp_page *page;
 	void *addr;
 
 	/* Dump all pgtable pages in the hyp_pool */
@@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	/* Drain the hyp_pool into the memcache */
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
-		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		page = hyp_virt_to_page(addr);
+		page->refcount = 0;
+		page->order = 0;
 		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
 		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
 		addr = hyp_alloc_pages(&vm->pool, 0);
@@ -382,19 +385,25 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }
 
-static bool addr_is_allowed_memory(phys_addr_t phys)
+static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
+{
+	return range->start <= addr && addr < range->end;
+}
+
+static int range_is_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;
 
-	reg = find_mem_range(phys, &range);
+	/* Can't check the state of both MMIO and memory regions at once */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1, &range))
+		return -EINVAL;
 
-	return reg && !(reg->flags & MEMBLOCK_NOMAP);
-}
+	if (!reg || reg->flags & MEMBLOCK_NOMAP)
+		return -EPERM;
 
-static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
-{
-	return range->start <= addr && addr < range->end;
+	return 0;
 }
 
 static bool range_is_memory(u64 start, u64 end)
@@ -454,8 +463,11 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 	if (kvm_pte_valid(pte))
 		return -EAGAIN;
 
-	if (pte)
+	if (pte) {
+		WARN_ON(addr_is_memory(addr) &&
+			!(hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE));
 		return -EPERM;
+	}
 
 	do {
 		u64 granule = kvm_granule_size(level);
@@ -477,10 +489,29 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }
 
+static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
+{
+	phys_addr_t end = addr + size;
+
+	for (; addr < end; addr += PAGE_SIZE)
+		hyp_phys_to_page(addr)->host_state = state;
+}
+
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
-			       addr, size, &host_s2_pool, owner_id);
+	int ret;
+
+	ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+			      addr, size, &host_s2_pool, owner_id);
+	if (ret || !addr_is_memory(addr))
+		return ret;
+
+	/* Don't forget to update the vmemmap tracking for the host */
+	if (owner_id == PKVM_ID_HOST)
+		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
+	else
+		__host_update_page_state(addr, size, PKVM_NOPAGE);
+
+	return 0;
 }
 
 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
@@ -604,35 +635,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
-{
-	if (!addr_is_allowed_memory(addr))
-		return PKVM_NOPAGE;
-
-	if (!kvm_pte_valid(pte) && pte)
-		return PKVM_NOPAGE;
-
-	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
-}
-
 static int __host_check_page_state_range(u64 addr, u64 size,
					 enum pkvm_page_state state)
 {
-	struct check_walk_data d = {
-		.desired	= state,
-		.get_page_state	= host_get_page_state,
-	};
+	u64 end = addr + size;
+	int ret;
+
+	ret = range_is_allowed_memory(addr, end);
+	if (ret)
+		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	return check_page_state_range(&host_mmu.pgt, addr, size, &d);
+	for (; addr < end; addr += PAGE_SIZE) {
+		if (hyp_phys_to_page(addr)->host_state != state)
+			return -EPERM;
+	}
+
+	return 0;
 }
 
 static int __host_set_page_state_range(u64 addr, u64 size,
				       enum pkvm_page_state state)
 {
-	enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
+	if (hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE) {
+		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
-	return host_stage2_idmap_locked(addr, size, prot);
+		if (ret)
+			return ret;
+	}
+
+	__host_update_page_state(addr, size, state);
+
+	return 0;
 }
 
 static int host_request_owned_transition(u64 *completer_addr,
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..7e04d1c2a03d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -180,7 +180,6 @@ static void hpool_put_page(void *addr)
 static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
				     enum kvm_pgtable_walk_flags visit)
 {
-	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	phys_addr_t phys;
 
@@ -203,16 +202,16 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	case PKVM_PAGE_OWNED:
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_BORROWED;
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_OWNED;
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
+	return 0;
 }
 
 static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx,
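The structural change is easy to model in isolation: the host's state now
lives in a flat per-page array (the vmemmap) rather than in PTEs, so
updating it never has to touch, let alone split, a block mapping. Below is
a toy user-space model of __host_update_page_state(), with a fixed-size
array standing in for the hyp vmemmap; everything here is illustrative,
not kernel code.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define NR_PAGES   512	/* one 2MiB block's worth of 4KiB pages */

enum pkvm_page_state { PKVM_PAGE_OWNED, PKVM_PAGE_SHARED_OWNED, PKVM_NOPAGE };

struct hyp_page {
	uint16_t refcount;
	uint8_t  order;
	uint8_t  host_state;	/* models the new 8-bit bitfield */
};

static struct hyp_page vmemmap[NR_PAGES];	/* stands in for hyp_phys_to_page() */

/* Mirrors the shape of __host_update_page_state() in the diff above */
static void host_update_page_state(uint64_t addr, uint64_t size, uint8_t state)
{
	uint64_t end = addr + size;

	for (; addr < end; addr += PAGE_SIZE)
		vmemmap[addr >> PAGE_SHIFT].host_state = state;
}

int main(void)
{
	/* Share a single 4KiB page out of a 2MiB region: only the array
	 * changes, so a stage-2 block mapping covering the region can
	 * stay intact -- which is the point of the patch. */
	host_update_page_state(0x5000, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED);
	printf("page 5 state: %u\n", vmemmap[5].host_state);
	return 0;
}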
From patchwork Tue Dec 3 10:37:22 2024
X-Patchwork-Id: 13892107
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:22 +0000
Subject: [PATCH v2 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
Message-ID: <20241203103735.2267589-6-qperret@google.com>
kvm_pgtable_stage2_mkyoung currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..38b7ec1c8614 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,13 +669,15 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
  */
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..0470aedb4bf4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,14 +1245,13 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
				NULL, NULL, 0);
 }
 
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       NULL, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       NULL, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..a2339b76c826 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1718,13 +1718,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
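An illustrative caller-side view of the new parameter (not code from the
series): the flag combination below is the one the host fault path passes
in the diff above, while the zero-flag call is an assumption about how a
future pKVM caller that owns its walk exclusively might use it.

/* Host access-fault path: concurrent walkers, so walk 'shared' and
 * tolerate faults on the table walk itself: */
kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa,
			   KVM_PGTABLE_WALK_HANDLE_FAULT |
			   KVM_PGTABLE_WALK_SHARED);

/* Hypothetical pKVM caller holding the page-table lock exclusively: */
kvm_pgtable_stage2_mkyoung(vm_pgt, ipa, 0);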
From patchwork Tue Dec 3 10:37:23 2024
X-Patchwork-Id: 13892111
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:23 +0000
Subject: [PATCH v2 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
Message-ID: <20241203103735.2267589-7-qperret@google.com>

kvm_pgtable_stage2_relax_perms currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 38b7ec1c8614..c2f4149283ef 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -707,6 +707,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -719,7 +720,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0470aedb4bf4..b7a3b5363235 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1307,7 +1307,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1325,9 +1325,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a2339b76c826..641e4fec1659 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1452,6 +1452,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1695,13 +1696,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
		 * PTE, which will be preserved.
		 */
		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
 	}
 
 out_unlock:
From patchwork Tue Dec 3 10:37:24 2024
X-Patchwork-Id: 13892112
From: Quentin Perret <qperret@google.com>
Date: Tue, 3 Dec 2024 10:37:24 +0000
Subject: [PATCH v2 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
Message-ID: <20241203103735.2267589-8-qperret@google.com>

Turn kvm_pgtable_stage2_init() into a static inline function instead of
a macro. This will allow the usage of typeof() on it later on.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c2f4149283ef..04418b5e3004 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			      enum kvm_pgtable_stage2_flags flags,
			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
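Why a static inline enables typeof() where a macro does not, as a
stand-alone sketch (stage2_init here is a toy stand-in, not the kernel
function; typeof() is the GNU C extension the kernel uses throughout):

/* A function-like macro has no type: it only exists at preprocessing
 * time, so 'typeof(stage2_init_macro)' would fail to compile (the bare
 * name is not expanded and names no declared symbol). */
#define stage2_init_macro(pgt) __stage2_init(pgt, 0)

/* A static inline is a real function with a real type... */
static inline int stage2_init(int pgt) { return pgt; }

/* ...so typeof() can name it, e.g. to declare a matching pointer: */
typedef typeof(stage2_init) *stage2_init_fn_t;

static stage2_init_fn_t init_fn = stage2_init;

int main(void) { return init_fn(0); }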
From patchwork Tue Dec 3 10:37:25 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:25 +0000
Subject: [PATCH v2 08/18] KVM: arm64: Add {get,put}_pkvm_hyp_vm() helpers
Message-ID: <20241203103735.2267589-9-qperret@google.com>

In preparation for accessing pkvm_hyp_vm structures at EL2 in a context
where we can't always expect a vCPU to be loaded (e.g. MMU notifiers),
introduce get/put helpers to get temporary references to hyp VMs from
any context.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..f361d8b91930 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -70,4 +70,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 01616c39a810..4db88bedf8d5 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -327,6 +327,26 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
+{
+	struct pkvm_hyp_vm *hyp_vm;
+
+	hyp_spin_lock(&vm_table_lock);
+	hyp_vm = get_vm_by_handle(handle);
+	if (hyp_vm)
+		hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+
+	return hyp_vm;
+}
+
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
+{
+	hyp_spin_lock(&vm_table_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+}
+
 static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm)
 {
 	struct kvm *kvm = &hyp_vm->kvm;
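To illustrate how these helpers are meant to be used, here is a hedged
sketch of an EL2 consumer that pins a VM across an operation with no
vCPU loaded; handle_vm_op() and do_vm_op() are made-up names, not part
of the series:

	/* Sketch: temporarily pin a hyp VM from a context with no loaded vCPU. */
	static int handle_vm_op(pkvm_handle_t handle)
	{
		struct pkvm_hyp_vm *hyp_vm;
		int ret;

		hyp_vm = get_pkvm_hyp_vm(handle);	/* takes a reference */
		if (!hyp_vm)
			return -ENOENT;

		ret = do_vm_op(hyp_vm);			/* hypothetical operation */

		put_pkvm_hyp_vm(hyp_vm);		/* drops the reference */
		return ret;
	}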
From patchwork Tue Dec 3 10:37:26 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:26 +0000
Subject: [PATCH v2 09/18] KVM: arm64: Introduce __pkvm_vcpu_{load,put}()
Message-ID: <20241203103735.2267589-10-qperret@google.com>

From: Marc Zyngier

Rather than looking up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is
updated by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_asm.h       |  2 ++
 arch/arm64/kvm/arm.c                   | 14 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  7 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 47 ++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 29 ++++++++++++++++
 arch/arm64/kvm/vgic/vgic-v3.c          |  6 ++--
 6 files changed, 93 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ca2590344313..89c0fac69551 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,8 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..55cc62b2f469 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -619,12 +619,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
 
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+				  vcpu->kvm->arch.pkvm.handle,
+				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+	}
+
 	if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
 		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
+	}
+
 	kvm_arch_vcpu_put_debug_state_flags(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	if (has_vhe())
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index f361d8b91930..be52c5b15e21 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu {
 
 	/* Backpointer to the host's (untrusted) vCPU instance. */
 	struct kvm_vcpu *host_vcpu;
+
+	/*
+	 * If this hyp vCPU is loaded, then this is a backpointer to the
+	 * per-cpu pointer tracking us. Otherwise, NULL if not loaded.
+	 */
+	struct pkvm_hyp_vcpu **loaded_hyp_vcpu;
 };
 
 /*
@@ -69,6 +75,7 @@ int __pkvm_teardown_vm(pkvm_handle_t handle);
 struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6aa0b13d86e5..95d78db315b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -141,16 +141,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 		host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
 }
 
+static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2);
+	DECLARE_REG(u64, hcr_el2, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
+	if (!hyp_vcpu)
+		return;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) {
+		/* Propagate WFx trapping flags */
+		hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
+		hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+	}
+}
+
+static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (hyp_vcpu)
+		pkvm_put_hyp_vcpu(hyp_vcpu);
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
 	int ret;
 
-	host_vcpu = kern_hyp_va(host_vcpu);
-
 	if (unlikely(is_protected_kvm_enabled())) {
-		struct pkvm_hyp_vcpu *hyp_vcpu;
-		struct kvm *host_kvm;
+		struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
 
 		/*
 		 * KVM (and pKVM) doesn't support SME guests for now, and
@@ -163,9 +193,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 			goto out;
 		}
 
-		host_kvm = kern_hyp_va(host_vcpu->kvm);
-		hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
-					      host_vcpu->vcpu_idx);
 		if (!hyp_vcpu) {
 			ret = -EINVAL;
 			goto out;
@@ -176,12 +203,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 
 		ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
 
 		sync_hyp_vcpu(hyp_vcpu);
-		pkvm_put_hyp_vcpu(hyp_vcpu);
 	} else {
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(host_vcpu);
+		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
 	}
-
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
@@ -409,6 +434,8 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init_vm),
 	HANDLE_FUNC(__pkvm_init_vcpu),
 	HANDLE_FUNC(__pkvm_teardown_vm),
+	HANDLE_FUNC(__pkvm_vcpu_load),
+	HANDLE_FUNC(__pkvm_vcpu_put),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 4db88bedf8d5..d5c23449a64c 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,6 +23,12 @@ unsigned int kvm_arm_vmid_bits;
 
 unsigned int kvm_host_sve_max_vl;
 
+/*
+ * The currently loaded hyp vCPU for each physical CPU. Used only when
+ * protected KVM is enabled, but for both protected and non-protected VMs.
+ */
+static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
@@ -306,15 +312,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
 	struct pkvm_hyp_vm *hyp_vm;
 
+	/* Cannot load a new vcpu without putting the old one first. */
+	if (__this_cpu_read(loaded_hyp_vcpu))
+		return NULL;
+
 	hyp_spin_lock(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
 		goto unlock;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+
+	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
+		hyp_vcpu = NULL;
+		goto unlock;
+	}
+
+	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
+
+	if (hyp_vcpu)
+		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
+
 	return hyp_vcpu;
 }
 
@@ -323,10 +344,18 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
 	hyp_spin_lock(&vm_table_lock);
+	hyp_vcpu->loaded_hyp_vcpu = NULL;
+	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
+{
+	return __this_cpu_read(loaded_hyp_vcpu);
+}
+
 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
 {
 	struct pkvm_hyp_vm *hyp_vm;
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index f267bc2486a1..c2ef41fff079 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
 
 	WARN_ON(vgic_v4_put(vcpu));
 
 	if (has_vhe())
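As a usage illustration, the host-side sequence these hypercalls enable
can be condensed as below. host_run_vcpu_once() is a made-up wrapper:
the load/put calls really live in kvm_arch_vcpu_{load,put}() per the
arm.c hunk above, and the run call in the normal run loop.

	/* Sketch: one load/run/put cycle from the host's point of view. */
	static void host_run_vcpu_once(struct kvm_vcpu *vcpu)
	{
		/* Pin the hyp vCPU on this physical CPU (takes a VM reference). */
		kvm_call_hyp_nvhe(__pkvm_vcpu_load, vcpu->kvm->arch.pkvm.handle,
				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);

		/* Any number of run hypercalls now skip the VM-table lookup. */
		kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);

		/* Release the per-CPU slot and drop the reference. */
		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
	}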
From patchwork Tue Dec 3 10:37:27 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:27 +0000
Subject: [PATCH v2 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
Message-ID: <20241203103735.2267589-11-qperret@google.com>

In preparation for handling guest stage-2 mappings at EL2, introduce a
new pKVM hypercall that allows the host to share pages with
non-protected guests.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/include/asm/kvm_host.h             |  3 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h      |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 34 +++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 70 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  7 ++
 7 files changed, 118 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 89c0fac69551..449337f5b2a3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -65,6 +65,7 @@ enum __kvm_host_smccc_func {
 	/* Hypercalls available after pKVM finalisation */
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..f75988e3515b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,9 @@ struct kvm_vcpu_arch {
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
+	/* Pages to be donated to pkvm/EL2 if it runs out */
+	struct kvm_hyp_memcache pkvm_memcache;
+
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 25038ac705d8..a7976e50f556 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,6 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 08f3a0416d4c..457318215155 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -47,6 +47,8 @@ struct hyp_page {
 
 	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
 	enum pkvm_page_state host_state : 8;
+
+	u32 host_share_guest_count;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 95d78db315b3..d659462fbf5d 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -211,6 +211,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+
+	return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache,
+			       host_vcpu->arch.pkvm_memcache.nr_pages,
+			       &host_vcpu->arch.pkvm_memcache);
+}
+
+static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = pkvm_refill_memcache(hyp_vcpu);
+	if (ret)
+		goto out;
+
+	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -420,6 +453,7 @@ static const hcall_t host_hcall[] = {
 
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
+	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 1595081c4f6b..a69d7212b64c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -861,6 +861,27 @@ static int hyp_complete_donation(u64 addr,
 	return pkvm_create_mappings_locked(start, end, prot);
 }
 
+static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
+{
+	if (!kvm_pte_valid(pte))
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+					  u64 size, enum pkvm_page_state state)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	struct check_walk_data d = {
+		.desired	= state,
+		.get_page_state	= guest_get_page_state,
+	};
+
+	hyp_assert_lock_held(&vm->lock);
+	return check_page_state_range(&vm->pgt, addr, size, &d);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1343,3 +1364,52 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+			    enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 phys = hyp_pfn_to_phys(pfn);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	int ret;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	ret = range_is_allowed_memory(phys, phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	switch (page->host_state) {
+	case PKVM_PAGE_OWNED:
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
+		break;
+	case PKVM_PAGE_SHARED_OWNED:
+		/* Only host to np-guest multi-sharing is tolerated */
+		WARN_ON(!page->host_share_guest_count);
+		break;
+	default:
+		ret = -EPERM;
+		goto unlock;
+	}
+
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
+				       &vcpu->vcpu.arch.pkvm_memcache, 0));
+	page->host_share_guest_count++;
+
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d5c23449a64c..d6c61a5e7b6e 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -795,6 +795,13 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	/* Push the metadata pages to the teardown memcache */
 	for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
 		struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
+		struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
+
+		while (vcpu_mc->nr_pages) {
+			void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
+			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+		}
 
 		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}
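As a usage illustration: a host-side caller (the wrapper name and page
count below are made up; the real caller arrives later in the series,
and this assumes topup_hyp_memcache()'s two-argument form) first tops
up the vCPU's pkvm_memcache so EL2 can allocate stage-2 table pages,
then issues the hypercall with the target vCPU loaded:

	/* Sketch: host shares one page with a loaded non-protected guest vCPU. */
	static int host_share_page_with_guest(struct kvm_vcpu *vcpu, u64 pfn, u64 gfn)
	{
		enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_RWX;
		int ret;

		/* Donate spare pages so the EL2 walker can build missing levels. */
		ret = topup_hyp_memcache(&vcpu->arch.pkvm_memcache, 4 /* arbitrary */);
		if (ret)
			return ret;

		/* Requires this vCPU to be the one loaded via __pkvm_vcpu_load(). */
		return kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
	}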
From patchwork Tue Dec 3 10:37:28 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:28 +0000
Subject: [PATCH v2 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest()
Message-ID: <20241203103735.2267589-12-qperret@google.com>

In preparation for letting the host unmap pages from non-protected
guests, introduce a new hypercall implementing the host-unshare-guest
transition.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |  5 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 +++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 67 +++++++++++++++++++
 5 files changed, 98 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 449337f5b2a3..0b6c4d325134 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -66,6 +66,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a7976e50f556..e528a42ed60e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -40,6 +40,7 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index be52c5b15e21..5dfc9ece9aa5 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
 	return vcpu_is_protected(&hyp_vcpu->vcpu);
 }
 
+static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm)
+{
+	return kvm_vm_is_protected(&hyp_vm->kvm);
+}
+
 void pkvm_hyp_vm_table_init(void *tbl);
 
 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d659462fbf5d..04a9053ae1d5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -244,6 +244,29 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -454,6 +477,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
+	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a69d7212b64c..aa27a3e42e5e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1413,3 +1413,70 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 
 	return ret;
 }
+
+static int __check_host_unshare_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+{
+	enum pkvm_page_state state;
+	struct hyp_page *page;
+	kvm_pte_t pte;
+	u64 phys;
+	s8 level;
+	int ret;
+
+	ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
+	if (ret)
+		return ret;
+	if (level != KVM_PGTABLE_LAST_LEVEL)
+		return -E2BIG;
+	if (!kvm_pte_valid(pte))
+		return -ENOENT;
+
+	state = guest_get_page_state(pte, ipa);
+	if (state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	phys = kvm_pte_to_phys(pte);
+	ret = range_is_allowed_memory(phys, phys + PAGE_SIZE);
+	if (WARN_ON(ret))
+		return ret;
+
+	page = hyp_phys_to_page(phys);
+	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+	if (WARN_ON(!page->host_share_guest_count))
+		return -EINVAL;
+
+	*__phys = phys;
+
+	return 0;
+}
+
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(hyp_vm);
+
+	ret = __check_host_unshare_guest(hyp_vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_unmap(&hyp_vm->pgt, ipa, PAGE_SIZE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	page->host_share_guest_count--;
+	if (!page->host_share_guest_count)
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+
+unlock:
+	guest_unlock_component(hyp_vm);
+	host_unlock_component();
+
+	return ret;
+}
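The net effect of the share/unshare pair on the hyp vmemmap metadata
can be summarised by the following pseudo-assertion (illustrative only,
and ignoring pages that are simultaneously shared with the hypervisor
or via FF-A):

	/* For a page whose only borrowers are non-protected guests: */
	static void check_np_guest_share_invariant(struct hyp_page *page)
	{
		WARN_ON((page->host_share_guest_count > 0) !=
			(page->host_state == PKVM_PAGE_SHARED_OWNED));
	}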
From patchwork Tue Dec 3 10:37:29 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:29 +0000
Subject: [PATCH v2 12/18] KVM: arm64: Introduce __pkvm_host_relax_guest_perms()
Message-ID: <20241203103735.2267589-13-qperret@google.com>

Introduce a new hypercall allowing the host to relax the stage-2
permissions of mappings in a non-protected guest page-table. It will be
used later once we start allowing RO memslots and dirty logging.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 20 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 23 +++++++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 0b6c4d325134..5d51933e44fb 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -67,6 +67,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index e528a42ed60e..db0dd83c2457 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,6 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 04a9053ae1d5..60dd56bbd743 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -267,6 +267,25 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_relax_guest_perms(gfn, prot, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -478,6 +497,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
+	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index aa27a3e42e5e..d4b28e93e790 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1480,3 +1480,26 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
 
 	return ret;
 }
+
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	if ((prot & KVM_PGTABLE_PROT_RWX) != prot)
+		return -EPERM;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
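Together with __pkvm_host_wrprotect_guest() introduced in the next
patch, this gives the host the usual dirty-logging building blocks for
np-guests. A hedged sketch of the round trip (the wrapper names are
invented here, not part of the series):

	/* Sketch: write-protect a page, then restore write access on fault. */
	static void start_dirty_logging(pkvm_handle_t handle, u64 gfn)
	{
		/* Guest writes to this page will now take a permission fault. */
		kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn);
	}

	static void handle_write_fault(u64 gfn)
	{
		/*
		 * Mark the gfn dirty in the host's log, then relax permissions
		 * back to RWX. Must run with the faulting vCPU loaded.
		 */
		kvm_call_hyp_nvhe(__pkvm_host_relax_guest_perms, gfn,
				  KVM_PGTABLE_PROT_RWX);
	}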
From patchwork Tue Dec 3 10:37:30 2024
From: Quentin Perret
Date: Tue, 3 Dec 2024 10:37:30 +0000
Subject: [PATCH v2 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
Message-ID: <20241203103735.2267589-14-qperret@google.com>

Introduce a new hypercall to remove the write permission from a
non-protected guest stage-2 mapping. This will be used, for example,
when enabling dirty logging.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 19 +++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5d51933e44fb..4d7d20ea03df 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index db0dd83c2457..8658b5932473 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 60dd56bbd743..3feaf2119e51 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -286,6 +286,29 @@ static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ct
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -498,6 +521,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
+	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d4b28e93e790..89312d7cde2a 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1503,3 +1503,22 @@ int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pk
 
 	return ret;
 }
+
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Tue Dec 3 10:37:31 2024
Date: Tue, 3 Dec 2024 10:37:31 +0000
In-Reply-To: <20241203103735.2267589-1-qperret@google.com>
Message-ID: <20241203103735.2267589-15-qperret@google.com>
Subject: [PATCH v2 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest()
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Plumb the kvm_stage2_test_clear_young() callback into pKVM for
non-protected guests. It will later be called from the MMU notifiers.
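As a rough, hedged sketch of the intended use from the notifier side
(the helper name is illustrative; the real plumbing lands later in the
series):

static bool host_test_clear_young_gfn(pkvm_handle_t handle, u64 gfn, bool mkold)
{
	/*
	 * arg1 = VM handle, arg2 = gfn, arg3 = mkold (clear the access
	 * flag rather than only test it). Non-zero means 'young'.
	 */
	return kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest,
				 handle, gfn, mkold);
}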
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 25 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 19 ++++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4d7d20ea03df..cb676017d591 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 8658b5932473..554ce31882e6 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum k
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 3feaf2119e51..67cb6e284180 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -309,6 +309,30 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -522,6 +546,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
+	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 89312d7cde2a..0e064a7ed7c4 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1522,3 +1522,22 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
 
 	return ret;
 }
+
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (!ret)
+		ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Tue Dec 3 10:37:32 2024
Date: Tue, 3 Dec 2024 10:37:32 +0000
In-Reply-To: <20241203103735.2267589-1-qperret@google.com>
Message-ID: <20241203103735.2267589-16-qperret@google.com>
Subject: [PATCH v2 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest()
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for
non-protected guests. It will be called later from the fault handling
path.
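A minimal sketch of the call-site shape (helper name illustrative):
unlike wrprotect/test_clear_young, no VM handle is passed -- the
handler below resolves the currently loaded vCPU itself via
pkvm_get_loaded_hyp_vcpu().

static void host_mkyoung_gfn(u64 gfn)
{
	/* Only arg1 (the gfn) is used; failure is unexpected, hence WARN. */
	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, gfn));
}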
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 19 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 20 +++++++++++++++++++
 4 files changed, 41 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index cb676017d591..6178e12a0dbc 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 554ce31882e6..3ae0c3ecff48 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -44,6 +44,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 67cb6e284180..de0012a75827 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -333,6 +333,24 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_mkyoung_guest(gfn, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -547,6 +565,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
+	HANDLE_FUNC(__pkvm_host_mkyoung_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0e064a7ed7c4..7605bd7f80b5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1541,3 +1541,23 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 
 	return ret;
 }
+
+int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (!ret)
+		kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
+
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Tue Dec 3 10:37:33 2024
Date: Tue, 3 Dec 2024 10:37:33 +0000
In-Reply-To: <20241203103735.2267589-1-qperret@google.com>
Message-ID: <20241203103735.2267589-17-qperret@google.com>
Subject: [PATCH v2 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Introduce a new hypercall to flush the TLBs of non-protected guests.
The host kernel will be responsible for issuing this hypercall after
changing stage-2 permissions using the __pkvm_host_relax_guest_perms()
or __pkvm_host_wrprotect_guest() paths. This is left under the host's
responsibility for performance reasons.

Note however that the TLB maintenance for all *unmap* operations still
remains entirely under the hypervisor's responsibility for security
reasons -- an unmapped page may be donated to another entity, so a stale
TLB entry could be used to leak private data.
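To make the division of labour concrete, a hedged sketch of one
possible host-side sequence under this scheme (helper name
illustrative): permissions are reduced via hypercall, then the host
issues the flush itself.

static int host_wrprotect_and_flush(pkvm_handle_t handle, u64 gfn)
{
	int ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn);

	/* Unmaps are different: their TLB invalidation stays at EL2. */
	if (!ret)
		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, handle);

	return ret;
}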
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6178e12a0dbc..df6237d0459c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -87,6 +87,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
+	__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index de0012a75827..219d7fb850ec 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -398,6 +398,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }
 
+static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	struct pkvm_hyp_vm *hyp_vm;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		return;
+
+	__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+	put_pkvm_hyp_vm(hyp_vm);
+}
+
 static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -582,6 +598,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_teardown_vm),
 	HANDLE_FUNC(__pkvm_vcpu_load),
 	HANDLE_FUNC(__pkvm_vcpu_put),
+	HANDLE_FUNC(__pkvm_tlb_flush_vmid),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)

From patchwork Tue Dec 3 10:37:34 2024
Date: Tue, 3 Dec 2024 10:37:34 +0000
In-Reply-To: <20241203103735.2267589-1-qperret@google.com>
Message-ID: <20241203103735.2267589-18-qperret@google.com>
Subject: [PATCH v2 17/18] KVM: arm64: Introduce the EL1 pKVM MMU
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Introduce a set of helper functions that allow manipulating the pKVM
guest stage-2 page-tables from EL1 using pKVM's HVC interface. Each
helper has an exact one-to-one correspondence with the traditional
kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly
matching prototype. This will ease the plumbing done later in mmu.c.

These callbacks track the gfn->pfn mappings in a simple rb-tree indexed
by IPA, in lieu of a page-table. This rb-tree is kept in sync with
pKVM's state and is protected by a new rwlock -- the existing mmu_lock
protection does not suffice in the map() path, where the tree must be
modified while user_mem_abort() only acquires a read lock.
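As an illustrative sketch of what such a gfn-ordered tree supports (an
exact-match lookup; this helper is not part of the patch, which only
needs find_first_mapping_node() below for range walks):

static struct pkvm_mapping *pkvm_find_mapping(struct rb_root *root, u64 gfn)
{
	struct rb_node *node = root->rb_node;

	while (node) {
		struct pkvm_mapping *m = rb_entry(node, struct pkvm_mapping, node);

		if (gfn == m->gfn)
			return m;
		node = gfn < m->gfn ? node->rb_left : node->rb_right;
	}

	return NULL;
}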
Signed-off-by: Quentin Perret <qperret@google.com>
---
The embedded union inside struct kvm_pgtable is arguably a bit horrible
currently... I considered making the pgt argument to all kvm_pgtable_*()
functions an opaque void * ptr, and moving the definition of struct
kvm_pgtable to pgtable.c and the pkvm version into pkvm.c. Given that
the allocation of that data-structure is done by the caller, that means
we'd need to expose kvm_pgtable_get_pgd_size() or something that each
MMU (pgtable.c and pkvm.c) would have to implement and things like that.
But that felt like a bigger surgery, so I went with the simpler option.
Thoughts welcome :-)

Similarly, happy to drop the mappings_lock if we want to teach
user_mem_abort() about taking a write lock on the mmu_lock in the pKVM
case, but again this implementation is the least invasive into normal
KVM so that felt like a reasonable starting point.
---
 arch/arm64/include/asm/kvm_host.h    |   1 +
 arch/arm64/include/asm/kvm_pgtable.h |  27 ++--
 arch/arm64/include/asm/kvm_pkvm.h    |  28 ++++
 arch/arm64/kvm/pkvm.c                | 195 +++++++++++++++++++++++++++
 4 files changed, 242 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f75988e3515b..05936b57a3a4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -85,6 +85,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 struct kvm_hyp_memcache {
 	phys_addr_t head;
 	unsigned long nr_pages;
+	struct pkvm_mapping *mapping; /* only used from EL1 */
 };
 
 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 04418b5e3004..d24d18874015 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -412,15 +412,24 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  * be used instead of block mappings.
  */
 struct kvm_pgtable {
-	u32 ia_bits;
-	s8 start_level;
-	kvm_pteref_t pgd;
-	struct kvm_pgtable_mm_ops *mm_ops;
-
-	/* Stage-2 only */
-	struct kvm_s2_mmu *mmu;
-	enum kvm_pgtable_stage2_flags flags;
-	kvm_pgtable_force_pte_cb_t force_pte_cb;
+	union {
+		struct {
+			u32 ia_bits;
+			s8 start_level;
+			kvm_pteref_t pgd;
+			struct kvm_pgtable_mm_ops *mm_ops;
+
+			/* Stage-2 only */
+			struct kvm_s2_mmu *mmu;
+			enum kvm_pgtable_stage2_flags flags;
+			kvm_pgtable_force_pte_cb_t force_pte_cb;
+		};
+		struct {
+			struct kvm *kvm;
+			struct rb_root mappings;
+			rwlock_t mappings_lock;
+		} pkvm;
+	};
 };
 
 /**
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..84211d5daf87 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -11,6 +11,12 @@
 #include
 #include
 
+struct pkvm_mapping {
+	u64 gfn;
+	u64 pfn;
+	struct rb_node node;
+};
+
 /* Maximum number of VMs that can co-exist under pKVM. */
 #define KVM_MAX_PVMS 255
@@ -137,4 +143,26 @@ static inline size_t pkvm_host_sve_state_size(void)
 		       SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
 }
 
+static inline pkvm_handle_t pkvm_pgt_to_handle(struct kvm_pgtable *pgt)
+{
+	return pgt->pkvm.kvm->arch.pkvm.handle;
+}
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops);
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt);
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold);
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags);
+void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc);
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level);
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte);
+
 #endif /* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 85117ea8f351..9c648a510671 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,3 +269,197 @@ static int __init finalize_pkvm(void)
 	return ret;
 }
 device_initcall_sync(finalize_pkvm);
+
+static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+{
+	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
+	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
+
+	if (a->gfn < b->gfn)
+		return -1;
+	if (a->gfn > b->gfn)
+		return 1;
+	return 0;
+}
+
+static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+{
+	struct rb_node *node = root->rb_node, *prev = NULL;
+	struct pkvm_mapping *mapping;
+
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		if (mapping->gfn == gfn)
+			return node;
+		prev = node;
+		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
+	}
+
+	return prev;
+}
+
+#define for_each_mapping_in_range(pgt, start_ipa, end_ipa, mapping, tmp)			\
+	for (tmp = find_first_mapping_node(&pgt->pkvm.mappings, ((start_ipa) >> PAGE_SHIFT));	\
+	     tmp && ({ mapping = rb_entry(tmp, struct pkvm_mapping, node); tmp = rb_next(tmp); 1; });) \
+		if (mapping->gfn < ((start_ipa) >> PAGE_SHIFT))					\
+			continue;								\
+		else if (mapping->gfn >= ((end_ipa) >> PAGE_SHIFT))				\
+			break;									\
+		else
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops)
+{
+	pgt->pkvm.kvm = kvm_s2_mmu_to_kvm(mmu);
+	pgt->pkvm.mappings = RB_ROOT;
+	rwlock_init(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *node;
+
+	if (!handle)
+		return;
+
+	node = rb_first(&pgt->pkvm.mappings);
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		node = rb_next(node);
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+}
+
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags)
+{
+	struct pkvm_mapping *mapping = NULL;
+	struct kvm_hyp_memcache *cache = mc;
+	u64 gfn = addr >> PAGE_SHIFT;
+	u64 pfn = phys >> PAGE_SHIFT;
+	int ret;
+
+	if (size != PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	if (ret) {
+		/* Is the gfn already mapped due to a racing vCPU? */
+		if (ret == -EPERM)
+			ret = -EAGAIN;
+		goto unlock;
+	}
+
+	swap(mapping, cache->mapping);
+	mapping->gfn = gfn;
+	mapping->pfn = pfn;
+	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm.mappings, cmp_mappings));
+unlock:
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+	}
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	bool young = false;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
+					   mkold);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return young;
+}
+
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_relax_guest_perms, addr >> PAGE_SHIFT, prot);
+}
+
+void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags)
+{
+	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT));
+}
+
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
+{
+	WARN_ON(1);
+}
+
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte)
+{
+	WARN_ON(1);
+	return NULL;
+}
+
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc)
+{
+	WARN_ON(1);
+	return -EINVAL;
+}

From patchwork Tue Dec 3 10:37:35 2024
Date: Tue, 3 Dec 2024 10:37:35 +0000
In-Reply-To: <20241203103735.2267589-1-qperret@google.com>
Message-ID: <20241203103735.2267589-19-qperret@google.com>
Subject: [PATCH v2 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Introduce the KVM_PGT_S2() helper macro to allow switching easily from
the traditional pgtable code to the pKVM version in mmu.c. The cost of
this 'indirection' is expected to be minimal, since
is_protected_kvm_enabled() is backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using the
host's kvm_s2_mmu from EL2 and enjoy the ride.
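For illustration, a call through KVM_PGT_S2() expands to roughly the
following open-coded form (using wrprotect as the example; the macro
itself is in the mmu.c hunk below, and this expanded helper is not part
of the patch):

static int kvm_s2_wrprotect_expanded(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
	typeof(kvm_pgtable_stage2_wrprotect) *fn = kvm_pgtable_stage2_wrprotect;

	/* is_protected_kvm_enabled() is static-key backed, so this is ~free. */
	if (is_protected_kvm_enabled())
		fn = pkvm_pgtable_wrprotect;

	return fn(pgt, addr, size);
}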
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/arm.c               |   9 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   2 -
 arch/arm64/kvm/mmu.c               | 103 +++++++++++++++++++++--------
 3 files changed, 83 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 55cc62b2f469..9bcbc7b8ed38 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 219d7fb850ec..64c7dc595218 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
 	hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWI | HCR_TWE);
 	hyp_vcpu->vcpu.arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2) &
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 641e4fec1659..058bc2c8f3c6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_S2(fn, ...)							\
+	({									\
+		typeof(kvm_pgtable_stage2_ ## fn) *__fn = kvm_pgtable_stage2_ ## fn; \
+		if (is_protected_kvm_enabled())					\
+			__fn = pkvm_pgtable_ ## fn;				\
+		__fn(__VA_ARGS__);						\
+	})
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
 					   phys_addr_t size)
 {
@@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_S2(split, pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
@@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -280,6 +297,11 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 	__invalidate_icache_guest_page(va, size);
 }
 
+static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(unmap, pgt, addr, size);
+}
+
 /*
  * Unmapping vs dcache management:
  *
@@ -324,8 +346,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
 
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
-				   may_block));
+	WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block));
 }
 
 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -334,9 +355,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 	__unmap_stage2_range(mmu, start, size, may_block);
 }
 
+static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(flush, pgt, addr, size);
+}
+
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush);
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +968,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 		return -ENOMEM;
 
 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;
 
+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +989,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
 
-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 
 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +997,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_S2(destroy, pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1094,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_S2(destroy, pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1111,11 @@ static void *hyp_mc_alloc_fn(void *unused)
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }
 
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1123,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
 				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1167,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;
 
 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1143,6 +1179,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	return ret;
 }
 
+static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(wrprotect, pgt, addr, size);
+}
 /**
  * kvm_stage2_wp_range() - write protect stage2 memory region range
  * @mmu:	The KVM stage-2 MMU pointer
@@ -1151,7 +1191,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect);
 }
 
 /**
@@ -1442,9 +1482,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1472,8 +1512,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1494,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1696,9 +1743,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize,
 				     __pfn_to_phys(pfn), prot,
 				     memcache, flags);
 	}
@@ -1724,7 +1771,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
@@ -1764,7 +1811,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}
 
 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
 
 		if (is_iabt)
@@ -1930,7 +1977,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, true);
 	/*
@@ -1946,7 +1993,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
 						   range->start << PAGE_SHIFT,
 						   size, false);
 }