From patchwork Mon Nov 4 13:31:47 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861426
Date: Mon, 4 Nov 2024 13:31:47 +0000
Message-ID: <20241104133204.85208-2-qperret@google.com>
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Subject: [PATCH 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

The 'concrete' (a.k.a. non-meta) page states are currently encoded using
software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states, as that
makes conversions from and to PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..ca3177481b78 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,28 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),

 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))

 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
 						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }

 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }

 struct host_mmu {

From patchwork Mon Nov 4 13:31:48 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861437
Date: Mon, 4 Nov 2024 13:31:48 +0000
Message-ID: <20241104133204.85208-3-qperret@google.com>
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Subject: [PATCH 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 35 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 34 ++++++++++++++++++
 2 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ca3177481b78..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,43 +11,10 @@
 #include
 #include
 #include
+#include
 #include
 #include

-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..6dfeb000371c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,40 @@

 #include

+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;

From patchwork Mon Nov 4 13:31:49 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861438
Date: Mon, 4 Nov 2024 13:31:49 +0000
Message-ID: <20241104133204.85208-4-qperret@google.com>
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Subject: [PATCH 03/18] KVM: arm64: Make hyp_page::order a u8
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

We don't need 16 bits to store the hyp page order, and
we'll need some bits to store page ownership data soon, so let's
reduce the order member.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..f1725bad6331 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include
 #include

-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff

 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };

 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 6dfeb000371c..88cb8ff9e769 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -42,8 +42,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }

 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };

 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t
 addr = hyp_page_to_phys(p);

@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);

@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
 	struct hyp_page *buddy;

 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;

@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)

 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;

 	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }

-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;

 	hyp_spin_lock(&pool->lock);

From patchwork Mon Nov 4 13:31:50 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861439
Date: Mon, 4 Nov 2024 13:31:50 +0000
Message-ID: <20241104133204.85208-5-qperret@google.com>
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Subject: [PATCH 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as it forces us to break block mappings purely to support
this software tracking. This causes an unnecessarily fragmented stage-2
page-table for the host, in particular when it shares pages with Secure,
which can lead to measurable regressions. Moreover, having this state
stored in the page-table forces us to do multiple costly walks on the
page transition path, hence causing overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  6 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 94 ++++++++++++++++--------
 arch/arm64/kvm/hyp/nvhe/setup.c          |  7 +-
 3 files changed, 71 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 88cb8ff9e769..08f3a0416d4c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include

 /*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ * Bits 0-1 are reserved to track the memory ownership state of each page:
 *   00: The page is owned exclusively by the page-table owner.
 *   01: The page is owned by the page-table owner, but is shared
 *       with another entity.
@@ -44,7 +44,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)

 struct hyp_page {
 	u16 refcount;
 	u8 order;
-	u8 reserved;
+
+	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
+	enum pkvm_page_state host_state : 8;
 };

 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..1595081c4f6b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc)
 	memset(addr, 0, PAGE_SIZE);
 	p = hyp_virt_to_page(addr);
-	memset(p, 0, sizeof(*p));
 	p->refcount = 1;
+	p->order = 0;

 	return addr;
 }
@@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)

 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 {
+	struct hyp_page *page;
 	void *addr;

 	/* Dump all pgtable pages in the hyp_pool */
@@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	/* Drain the hyp_pool into the memcache */
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
-		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		page = hyp_virt_to_page(addr);
+		page->refcount = 0;
+		page->order = 0;
 		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
 		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
 		addr = hyp_alloc_pages(&vm->pool, 0);
@@ -382,19 +385,25 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }

-static bool addr_is_allowed_memory(phys_addr_t phys)
+static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
+{
+	return range->start <= addr && addr < range->end;
+}
+
+static int range_is_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;

-	reg = find_mem_range(phys, &range);
+	/* Can't check the state of both MMIO and memory regions at once */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1,
&range)) + return -EINVAL; - return reg && !(reg->flags & MEMBLOCK_NOMAP); -} + if (!reg || reg->flags & MEMBLOCK_NOMAP) + return -EPERM; -static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range) -{ - return range->start <= addr && addr < range->end; + return 0; } static bool range_is_memory(u64 start, u64 end) @@ -454,8 +463,11 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range) if (kvm_pte_valid(pte)) return -EAGAIN; - if (pte) + if (pte) { + WARN_ON(addr_is_memory(addr) && + !(hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE)); return -EPERM; + } do { u64 granule = kvm_granule_size(level); @@ -477,10 +489,29 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size, return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot); } +static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state) +{ + phys_addr_t end = addr + size; + for (; addr < end; addr += PAGE_SIZE) + hyp_phys_to_page(addr)->host_state = state; +} + int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id) { - return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt, - addr, size, &host_s2_pool, owner_id); + int ret; + + ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt, + addr, size, &host_s2_pool, owner_id); + if (ret || !addr_is_memory(addr)) + return ret; + + /* Don't forget to update the vmemmap tracking for the host */ + if (owner_id == PKVM_ID_HOST) + __host_update_page_state(addr, size, PKVM_PAGE_OWNED); + else + __host_update_page_state(addr, size, PKVM_NOPAGE); + + return 0; } static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot) @@ -604,35 +635,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size, return kvm_pgtable_walk(pgt, addr, size, &walker); } -static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr) -{ - if (!addr_is_allowed_memory(addr)) - return PKVM_NOPAGE; - - if (!kvm_pte_valid(pte) 
&& pte) - return PKVM_NOPAGE; - - return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); -} - static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - struct check_walk_data d = { - .desired = state, - .get_page_state = host_get_page_state, - }; + u64 end = addr + size; + int ret; + + ret = range_is_allowed_memory(addr, end); + if (ret) + return ret; hyp_assert_lock_held(&host_mmu.lock); - return check_page_state_range(&host_mmu.pgt, addr, size, &d); + for (; addr < end; addr += PAGE_SIZE) { + if (hyp_phys_to_page(addr)->host_state != state) + return -EPERM; + } + + return 0; } static int __host_set_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state); + if (hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE) { + int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT); - return host_stage2_idmap_locked(addr, size, prot); + if (ret) + return ret; + } + + __host_update_page_state(addr, size, state); + + return 0; } static int host_request_owned_transition(u64 *completer_addr, diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index 174007f3fadd..c315710f57ad 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -198,7 +198,6 @@ static void hpool_put_page(void *addr) static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx, enum kvm_pgtable_walk_flags visit) { - enum kvm_pgtable_prot prot; enum pkvm_page_state state; phys_addr_t phys; @@ -221,16 +220,16 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx, case PKVM_PAGE_OWNED: return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP); case PKVM_PAGE_SHARED_OWNED: - prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED); + hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_BORROWED; break; case PKVM_PAGE_SHARED_BORROWED: - prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, 
PKVM_PAGE_SHARED_OWNED); + hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_OWNED; break; default: return -EINVAL; } - return host_stage2_idmap_locked(phys, PAGE_SIZE, prot); + return 0; } static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx, From patchwork Mon Nov 4 13:31:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 13861441 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 679B4D132C8 for ; Mon, 4 Nov 2024 13:49:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=Zp5y0+Rq0WHrDia4bvFK9Du9WgLZRQc3oKl++fsdJMA=; b=dZAv2w634iuC9FpSuXgN2pJDB/ NnDDKBL9AsTMmLqi86lM9iaUWMx8f4GYgN2UE3GrVE5cNrz8918w+mEJAckzHOeg4IjLQ+QWFJFbo RXl82e/rxVdhtL2mlzZIYPycLgXs9zBszqr301omcyExhWfrwqbCEoj/tg1fI3CmbPb2x/5FE53v3 Z2qvKU//gnn6w4L4AGqPrGWekLvjys92P7/WSgjTIwBYbH72xFGd0P/w1oo4LvdwLfepep+uMzEzT R32L4XzFu8vXpb9gmRfXrtP3vxoZH3gw2A+gSgRfVekyWkbnB/ZmxHxi/PYNGAYb6FR3Yhg2i3u4g WPTyj/3Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1t7xSC-0000000Dvz9-3OFW; Mon, 04 Nov 2024 13:49:12 +0000 Received: from mail-ej1-x649.google.com ([2a00:1450:4864:20::649]) by bombadil.infradead.org with 
Date: Mon, 4 Nov 2024 13:31:51 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-6-qperret@google.com>
Subject: [PATCH 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

kvm_pgtable_stage2_mkyoung currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.
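The shape of this change can be sketched in plain C: the walker flags move from being hard-coded inside the helper to a caller-supplied parameter, so a future pKVM caller can request a non-shared walk. This is an illustrative userspace model only; the enum values and helper names below are invented stand-ins, not the kernel API.

```c
#include <assert.h>

/* Invented stand-ins for the real walk flags; the values are arbitrary. */
enum walk_flags {
	WALK_HANDLE_FAULT = 1 << 0,
	WALK_SHARED       = 1 << 1,
};

/* Stub for stage2_update_leaf_attrs(): just record which flags were used. */
static enum walk_flags last_flags;

static int update_leaf_attrs(enum walk_flags flags)
{
	last_flags = flags;
	return 0;
}

/* Before: the walker flags are baked into the helper. */
static int mkyoung_fixed(void)
{
	return update_leaf_attrs(WALK_HANDLE_FAULT | WALK_SHARED);
}

/* After: the caller decides; a pKVM caller could pass 0 for an
 * exclusive (non-shared) walk. */
static int mkyoung_flags(enum walk_flags flags)
{
	return update_leaf_attrs(flags);
}
```

With this shape, the existing host caller simply passes the old flag combination explicitly, so behaviour is unchanged there.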
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 03f4c3d7839c..442a45d38e23 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,6 +669,7 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -677,7 +678,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  *
  * Return: The old page-table entry prior to setting the flag, 0 on failure.
  */
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				     enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11bcebac908..fa25062f0590 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,15 +1245,14 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
				NULL, NULL, 0);
 }
 
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				     enum kvm_pgtable_walk_flags flags)
 {
 	kvm_pte_t pte = 0;
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       &pte, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       &pte, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0f7658aefa1a..27e1b281f402 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1708,6 +1708,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	kvm_pte_t pte;
 	struct kvm_s2_mmu *mmu;
 
@@ -1715,7 +1716,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 
 	if (kvm_pte_valid(pte))

From patchwork Mon Nov 4 13:31:52 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861442
Date: Mon, 4 Nov 2024 13:31:52 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-7-qperret@google.com>
Subject: [PATCH 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

kvm_pgtable_stage2_relax_perms currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM. To
allow for the re-use of that function, make the walk flags one of its
parameters.
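As with the previous patch, the host-side caller now builds the flag set once and hands it to whichever walker the fault path ends up invoking. A minimal stand-in model of that pattern (all names hypothetical, not the kernel API):

```c
#include <assert.h>

/* Invented stand-ins for the real walk flags. */
enum walk_flags {
	WALK_HANDLE_FAULT = 1 << 0,
	WALK_SHARED       = 1 << 1,
};

/* Stubs standing in for the two stage-2 operations, recording the flags
 * they were called with so the pattern can be checked. */
static enum walk_flags relax_flags, map_flags;

static int relax_perms(enum walk_flags flags) { relax_flags = flags; return 0; }
static int map_page(enum walk_flags flags)    { map_flags = flags;   return 0; }

/* Mirrors the shape of the fault handler: compute the flags once, then use
 * them on whichever path the fault takes (permission fault vs. new mapping). */
static int fault_path(int is_perm_fault)
{
	enum walk_flags flags = WALK_HANDLE_FAULT | WALK_SHARED;

	return is_perm_fault ? relax_perms(flags) : map_page(flags);
}
```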
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 442a45d38e23..f52fa8158ce6 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -709,6 +709,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -721,7 +722,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index fa25062f0590..ee060438dc77 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1310,7 +1310,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1328,9 +1328,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 27e1b281f402..80dd61038cc7 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1440,6 +1440,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1683,13 +1684,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
		 * PTE, which will be preserved.
		 */
		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
	} else {
		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
	}
 
 out_unlock:

From patchwork Mon Nov 4 13:31:53 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861444
Date: Mon, 4 Nov 2024 13:31:53 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-8-qperret@google.com>
Subject: [PATCH 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org
Turn kvm_pgtable_stage2_init() into a static inline function instead of
a macro. This will allow the usage of typeof() on it later on.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index f52fa8158ce6..047e1c06ae4c 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			      enum kvm_pgtable_stage2_flags flags,
			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
From patchwork Mon Nov 4 13:31:54 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13861445
Date: Mon, 4 Nov 2024 13:31:54
+0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-9-qperret@google.com>
Subject: [PATCH 08/18] KVM: arm64: Introduce pkvm_vcpu_{load,put}()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Marc Zyngier

Rather than look up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is updated
by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.
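The load/put pairing can be modelled in a few lines of plain C: a per-CPU slot (a single global here, standing in for the per-CPU variable) records the currently loaded vCPU, and a backpointer lets put() clear exactly the slot that load() populated. Purely illustrative; the types and names are stand-ins for the pKVM structures:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a hyp vCPU: loaded_slot points back at the (per-CPU) slot
 * tracking it while loaded, and is NULL otherwise. */
struct hyp_vcpu {
	int handle;
	struct hyp_vcpu **loaded_slot;
};

/* Stand-in for the per-CPU 'loaded_hyp_vcpu' variable. */
static struct hyp_vcpu *loaded_hyp_vcpu;

static int vcpu_load(struct hyp_vcpu *vcpu)
{
	if (loaded_hyp_vcpu)
		return -1;	/* something is already loaded on this CPU */

	loaded_hyp_vcpu = vcpu;
	vcpu->loaded_slot = &loaded_hyp_vcpu;
	return 0;
}

static void vcpu_put(void)
{
	struct hyp_vcpu *vcpu = loaded_hyp_vcpu;

	if (!vcpu)
		return;

	/* Clear the slot through the backpointer, then drop it. */
	*vcpu->loaded_slot = NULL;
	vcpu->loaded_slot = NULL;
}
```

The point of the backpointer is that the run path can then fetch the loaded vCPU without a handle lookup, which is the saving this patch is after.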
Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_asm.h | 2 ++ arch/arm64/kvm/arm.c | 14 ++++++++ arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 7 ++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 47 ++++++++++++++++++++------ arch/arm64/kvm/hyp/nvhe/pkvm.c | 28 +++++++++++++++ arch/arm64/kvm/vgic/vgic-v3.c | 6 ++-- 6 files changed, 92 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 67afac659231..a1c6dbec1871 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -80,6 +80,8 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_init_vm, __KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu, __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 48cafb65d6ac..2bf168b17a77 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -623,12 +623,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_arch_vcpu_load_debug_state_flags(vcpu); + if (is_protected_kvm_enabled()) { + kvm_call_hyp_nvhe(__pkvm_vcpu_load, + vcpu->kvm->arch.pkvm.handle, + vcpu->vcpu_idx, vcpu->arch.hcr_el2); + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + } + if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus)) vcpu_set_on_unsupported_cpu(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + if (is_protected_kvm_enabled()) { + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + kvm_call_hyp_nvhe(__pkvm_vcpu_put); + } + kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 24a9a8330d19..6940eb171a52 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -20,6 +20,12 @@ 
struct pkvm_hyp_vcpu { /* Backpointer to the host's (untrusted) vCPU instance. */ struct kvm_vcpu *host_vcpu; + + /* + * If this hyp vCPU is loaded, then this is a backpointer to the + * per-cpu pointer tracking us. Otherwise, NULL if not loaded. + */ + struct pkvm_hyp_vcpu **loaded_hyp_vcpu; }; /* @@ -69,5 +75,6 @@ int __pkvm_teardown_vm(pkvm_handle_t handle); struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle, unsigned int vcpu_idx); void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); +struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void); #endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index fefc89209f9e..6bcdba4fdc76 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -139,16 +139,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i]; } +static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2); + DECLARE_REG(u64, hcr_el2, host_ctxt, 3); + struct pkvm_hyp_vcpu *hyp_vcpu; + + if (!is_protected_kvm_enabled()) + return; + + hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx); + if (!hyp_vcpu) + return; + + if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) { + /* Propagate WFx trapping flags */ + hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI); + hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI); + } +} + +static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) +{ + struct pkvm_hyp_vcpu *hyp_vcpu; + + if (!is_protected_kvm_enabled()) + return; + + hyp_vcpu = pkvm_get_loaded_hyp_vcpu(); + if (hyp_vcpu) + pkvm_put_hyp_vcpu(hyp_vcpu); +} + static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1); int ret; - host_vcpu = kern_hyp_va(host_vcpu); - if 
(unlikely(is_protected_kvm_enabled())) { - struct pkvm_hyp_vcpu *hyp_vcpu; - struct kvm *host_kvm; + struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu(); /* * KVM (and pKVM) doesn't support SME guests for now, and @@ -161,9 +191,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) goto out; } - host_kvm = kern_hyp_va(host_vcpu->kvm); - hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle, - host_vcpu->vcpu_idx); if (!hyp_vcpu) { ret = -EINVAL; goto out; @@ -174,12 +201,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) ret = __kvm_vcpu_run(&hyp_vcpu->vcpu); sync_hyp_vcpu(hyp_vcpu); - pkvm_put_hyp_vcpu(hyp_vcpu); } else { /* The host is fully trusted, run its vCPU directly. */ - ret = __kvm_vcpu_run(host_vcpu); + ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu)); } - out: cpu_reg(host_ctxt, 1) = ret; } @@ -415,6 +440,8 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_init_vm), HANDLE_FUNC(__pkvm_init_vcpu), HANDLE_FUNC(__pkvm_teardown_vm), + HANDLE_FUNC(__pkvm_vcpu_load), + HANDLE_FUNC(__pkvm_vcpu_put), }; static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 077d4098548d..9ed2b8a63371 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,6 +20,12 @@ unsigned int kvm_arm_vmid_bits; unsigned int kvm_host_sve_max_vl; +/* + * The currently loaded hyp vCPU for each physical CPU. Used only when + * protected KVM is enabled, but for both protected and non-protected VMs. + */ +static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu); + /* * Set trap register values based on features in ID_AA64PFR0. */ @@ -268,15 +274,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle, struct pkvm_hyp_vcpu *hyp_vcpu = NULL; struct pkvm_hyp_vm *hyp_vm; + /* Cannot load a new vcpu without putting the old one first. 
*/ + if (__this_cpu_read(loaded_hyp_vcpu)) + return NULL; + hyp_spin_lock(&vm_table_lock); hyp_vm = get_vm_by_handle(handle); if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx) goto unlock; hyp_vcpu = hyp_vm->vcpus[vcpu_idx]; + + /* Ensure vcpu isn't loaded on more than one cpu simultaneously. */ + if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) { + hyp_vcpu = NULL; + goto unlock; + } + + hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu); hyp_page_ref_inc(hyp_virt_to_page(hyp_vm)); unlock: hyp_spin_unlock(&vm_table_lock); + + if (hyp_vcpu) + __this_cpu_write(loaded_hyp_vcpu, hyp_vcpu); return hyp_vcpu; } @@ -285,10 +306,17 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu); hyp_spin_lock(&vm_table_lock); + hyp_vcpu->loaded_hyp_vcpu = NULL; + __this_cpu_write(loaded_hyp_vcpu, NULL); hyp_page_ref_dec(hyp_virt_to_page(hyp_vm)); hyp_spin_unlock(&vm_table_lock); } +struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void) +{ + return __this_cpu_read(loaded_hyp_vcpu); +} + static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu) { if (host_vcpu) diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index b217b256853c..e43a8bb3e6b0 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); if (has_vhe()) __vgic_v3_activate_traps(cpu_if); @@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; - kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); WARN_ON(vgic_v4_put(vcpu)); if (has_vhe()) From patchwork Mon Nov 4 13:31:55 2024 Content-Type: 
text/plain; charset="utf-8"
MIME-Version: 1.0
Date: Mon, 4 Nov 2024 13:31:55 +0000
In-Reply-To:
<20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-10-qperret@google.com>
Subject: [PATCH 09/18] KVM: arm64: Introduce {get,put}_pkvm_hyp_vm() helpers
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for accessing pkvm_hyp_vm structures at EL2 in a context where we can't always expect a vCPU to be loaded (e.g. MMU notifiers), introduce get/put helpers to get temporary references to hyp VMs from any context.
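The get/put pattern amounts to a lock-protected reference count on the VM's metadata page. A minimal model, with assumed names (`get_hyp_vm`/`put_hyp_vm`; the real helpers look the VM up by handle under `vm_table_lock` and bump the page refcount via `hyp_page_ref_inc()`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of get_pkvm_hyp_vm()/put_pkvm_hyp_vm(): pin a VM
 * from any context (no loaded vCPU required) by taking a reference,
 * so the VM cannot be torn down while a caller still holds it. */
struct hyp_vm {
	int refcount;	/* page refcount in the real code */
	int valid;	/* stands in for presence in the VM table */
};

static struct hyp_vm *get_hyp_vm(struct hyp_vm *vm)
{
	/* vm_table_lock is held around this in the real code */
	if (!vm || !vm->valid)
		return NULL;
	vm->refcount++;
	return vm;
}

static void put_hyp_vm(struct hyp_vm *vm)
{
	vm->refcount--;
}
```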
Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 3 +++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 20 ++++++++++++++++++++ 2 files changed, 23 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h index 6940eb171a52..be52c5b15e21 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -77,4 +77,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle, void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void); +struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle); +void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm); + #endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 9ed2b8a63371..d242da1ec56a 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -317,6 +317,26 @@ struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void) return __this_cpu_read(loaded_hyp_vcpu); } +struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle) +{ + struct pkvm_hyp_vm *hyp_vm; + + hyp_spin_lock(&vm_table_lock); + hyp_vm = get_vm_by_handle(handle); + if (hyp_vm) + hyp_page_ref_inc(hyp_virt_to_page(hyp_vm)); + hyp_spin_unlock(&vm_table_lock); + + return hyp_vm; +} + +void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm) +{ + hyp_spin_lock(&vm_table_lock); + hyp_page_ref_dec(hyp_virt_to_page(hyp_vm)); + hyp_spin_unlock(&vm_table_lock); +} + static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu) { if (host_vcpu) From patchwork Mon Nov 4 13:31:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 13861448 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher 
Date: Mon, 4 Nov 2024 13:31:56 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-11-qperret@google.com>
Subject: [PATCH 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton,
Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for handling guest stage-2 mappings at EL2, introduce a new pKVM hypercall allowing the host to share pages with non-protected guests.

Signed-off-by: Quentin Perret --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/include/asm/kvm_host.h | 3 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/include/nvhe/memory.h | 2 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 34 +++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 70 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 7 ++ 7 files changed, 118 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index a1c6dbec1871..b69390108c5a 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -65,6 +65,7 @@ enum __kvm_host_smccc_func { /* Hypercalls available after pKVM finalisation */ __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, + __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bf64fed9820e..4b02904ec7c0 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -762,6 +762,9 @@
struct kvm_vcpu_arch { /* Cache some mmu pages needed inside spinlock regions */ struct kvm_mmu_memory_cache mmu_page_cache; + /* Pages to be donated to pkvm/EL2 if it runs out */ + struct kvm_hyp_memcache pkvm_memcache; + /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ u64 vsesr_el2; diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 25038ac705d8..a7976e50f556 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,6 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h index 08f3a0416d4c..457318215155 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -47,6 +47,8 @@ struct hyp_page { /* Host (non-meta) state. Guarded by the host stage-2 lock. 
*/ enum pkvm_page_state host_state : 8; + + u32 host_share_guest_count; }; extern u64 __hyp_vmemmap; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 6bcdba4fdc76..32bdf6b27958 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -209,6 +209,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) = ret; } +static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu) +{ + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu; + + return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache, + host_vcpu->arch.pkvm_memcache.nr_pages, + &host_vcpu->arch.pkvm_memcache); +} + +static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(u64, pfn, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + struct pkvm_hyp_vcpu *hyp_vcpu; + int ret = -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vcpu = pkvm_get_loaded_hyp_vcpu(); + if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu)) + goto out; + + ret = pkvm_refill_memcache(hyp_vcpu); + if (ret) + goto out; + + ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); +out: + cpu_reg(host_ctxt, 1) = ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -425,6 +458,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__pkvm_host_share_hyp), HANDLE_FUNC(__pkvm_host_unshare_hyp), + HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 1595081c4f6b..a69d7212b64c 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -861,6 +861,27 @@ static int hyp_complete_donation(u64 addr, return 
pkvm_create_mappings_locked(start, end, prot); } +static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr) +{ + if (!kvm_pte_valid(pte)) + return PKVM_NOPAGE; + + return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); +} + +static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr, + u64 size, enum pkvm_page_state state) +{ + struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu); + struct check_walk_data d = { + .desired = state, + .get_page_state = guest_get_page_state, + }; + + hyp_assert_lock_held(&vm->lock); + return check_page_state_range(&vm->pgt, addr, size, &d); +} + static int check_share(struct pkvm_mem_share *share) { const struct pkvm_mem_transition *tx = &share->tx; @@ -1343,3 +1364,52 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, + enum kvm_pgtable_prot prot) +{ + struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu); + u64 phys = hyp_pfn_to_phys(pfn); + u64 ipa = hyp_pfn_to_phys(gfn); + struct hyp_page *page; + int ret; + + if (prot & ~KVM_PGTABLE_PROT_RWX) + return -EINVAL; + + ret = range_is_allowed_memory(phys, phys + PAGE_SIZE); + if (ret) + return ret; + + host_lock_component(); + guest_lock_component(vm); + + ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + if (ret) + goto unlock; + + page = hyp_phys_to_page(phys); + switch (page->host_state) { + case PKVM_PAGE_OWNED: + WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED)); + break; + case PKVM_PAGE_SHARED_OWNED: + /* Only host to np-guest multi-sharing is tolerated */ + WARN_ON(!page->host_share_guest_count); + break; + default: + ret = -EPERM; + goto unlock; + } + + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), + &vcpu->vcpu.arch.pkvm_memcache, 0)); + page->host_share_guest_count++; + +unlock: + guest_unlock_component(vm); + 
host_unlock_component(); + + return ret; +} diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index d242da1ec56a..bdcfcc20cf66 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -680,6 +680,13 @@ int __pkvm_teardown_vm(pkvm_handle_t handle) /* Push the metadata pages to the teardown memcache */ for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) { struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx]; + struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache; + + while (vcpu_mc->nr_pages) { + void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt); + push_hyp_memcache(mc, addr, hyp_virt_to_phys); + unmap_donated_memory_noclear(addr, PAGE_SIZE); + } teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu)); } From patchwork Mon Nov 4 13:31:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 13861449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0321D132CD for ; Mon, 4 Nov 2024 13:59:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=VRegJK0d/KTMFa1b8xxTA9sCjP756MDWGX2tZ8RiT/M=; b=zHYlcmM13raok9voXs+joMJjAc 8lsyNpUJIIuippcQTquMXw/QcIwJd/8ZjmYlQ0tg6zO/rLw3TGq+5flZdP77LeO6x883eofs/uceu 
Date: Mon, 4 Nov 2024 13:31:57 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-12-qperret@google.com>
Subject: [PATCH 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

In preparation for letting the host unmap pages from non-protected guests, introduce a new hypercall implementing
the host-unshare-guest transition.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |  5 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 ++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 78 +++++++++++++++++++
 5 files changed, 109 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index b69390108c5a..e67efee936b6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -66,6 +66,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a7976e50f556..e528a42ed60e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -40,6 +40,7 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index be52c5b15e21..5dfc9ece9aa5 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
 	return vcpu_is_protected(&hyp_vcpu->vcpu);
 }
 
+static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm)
+{
+	return kvm_vm_is_protected(&hyp_vm->kvm);
+}
+
 void pkvm_hyp_vm_table_init(void *tbl);
 
 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 32bdf6b27958..68bbef69d99a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -242,6 +242,29 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -459,6 +482,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
+	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a69d7212b64c..f7476a29e1a9 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1413,3 +1413,81 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 	return ret;
 }
+
+static int guest_get_valid_pte(struct pkvm_hyp_vm *vm, u64 *phys, u64 ipa, u8 order, kvm_pte_t *pte)
+{
+	size_t size = PAGE_SIZE << order;
+	s8 level;
+
+	if (order && size != PMD_SIZE)
+		return -EINVAL;
+
+	WARN_ON(kvm_pgtable_get_leaf(&vm->pgt, ipa, pte, &level));
+
+	if (kvm_granule_size(level) != size)
+		return -E2BIG;
+
+	if (!kvm_pte_valid(*pte))
+		return -ENOENT;
+
+	*phys = kvm_pte_to_phys(*pte);
+
+	return 0;
+}
+
+static int __check_host_unshare_guest(struct pkvm_hyp_vm *vm, u64 *phys, u64 ipa)
+{
+	enum pkvm_page_state state;
+	struct hyp_page *page;
+	kvm_pte_t pte;
+	int ret;
+
+	ret = guest_get_valid_pte(vm, phys, ipa, 0, &pte);
+	if (ret)
+		return ret;
+
+	state = guest_get_page_state(pte, ipa);
+	if (state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	ret = range_is_allowed_memory(*phys, *phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	page = hyp_phys_to_page(*phys);
+	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+	WARN_ON(!page->host_share_guest_count);
+
+	return 0;
+}
+
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(hyp_vm);
+
+	ret = __check_host_unshare_guest(hyp_vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_unmap(&hyp_vm->pgt, ipa, PAGE_SIZE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	page->host_share_guest_count--;
+	if (!page->host_share_guest_count)
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+
+unlock:
+	guest_unlock_component(hyp_vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Mon Nov 4 13:31:58 2024
Date: Mon, 4 Nov 2024 13:31:58 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-13-qperret@google.com>
Subject: [PATCH 12/18] KVM: arm64: Introduce __pkvm_host_relax_guest_perms()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Introduce a new hypercall allowing the host to relax the stage-2
permissions of mappings in a non-protected guest page-table. It will be
used later once we start allowing RO memslots and dirty logging.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 20 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 25 +++++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e67efee936b6..f528656e8359 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -67,6 +67,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index e528a42ed60e..db0dd83c2457 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,6 +41,7 @@
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 68bbef69d99a..d3210719e247 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -265,6 +265,25 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_relax_guest_perms(gfn, prot, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -483,6 +502,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
+	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f7476a29e1a9..fc6050dcf904 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1491,3 +1491,28 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
 	return ret;
 }
+
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	if ((prot & KVM_PGTABLE_PROT_RWX) != prot)
+		return -EPERM;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Mon Nov 4 13:31:59 2024
Date: Mon, 4 Nov 2024 13:31:59 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-14-qperret@google.com>
Subject: [PATCH 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Introduce a new hypercall to remove the write permission from a
non-protected guest stage-2 mapping. This will be used for e.g. enabling
dirty logging.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 21 ++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f528656e8359..3f1f0760c375 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index db0dd83c2457..8658b5932473 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d3210719e247..ce33079072c0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -284,6 +284,29 @@ static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ct
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -503,6 +526,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
+	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index fc6050dcf904..3a8751175fd5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1516,3 +1516,24 @@ int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pk
 	return ret;
 }
+
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Mon Nov 4 13:32:00 2024
Date: Mon, 4 Nov 2024 13:32:00 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-15-qperret@google.com>
Subject: [PATCH 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Plumb the kvm_stage2_test_clear_young() callback into pKVM for
non-protected guests. It will later be called from MMU notifiers.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 25 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 21 ++++++++++++++++
 4 files changed, 48 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 3f1f0760c375..acb36762e15f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 8658b5932473..554ce31882e6 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum k
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index ce33079072c0..21c8a5e74d14 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -307,6 +307,30 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -527,6 +551,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
+	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 3a8751175fd5..7c2aca459deb 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1537,3 +1537,24 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
 	return ret;
 }
+
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Mon Nov 4 13:32:01 2024
XwpaJ1HzLH/QQ5x3KNGOagIJfpK4iTZv4xeG47FH8eJSVQaY8Pqdsl+4NCm+gf6TnQKz+zhYv/ZFh 8M9+7f73cQ5YnOR3K2zab8gHqCnJ/W4Je7aKjjt1Qxq6/UxTI9sB1i3W6PtBTS4t0qtQqHAHR2aEW ZGVYVhmLLqCI2A/W4IzyOlfroO1jBP7yrQRCNuaejBQB5XbElcZKwa/vKFIxJGmGTpTe9TKmWZQFr mPF6rlrQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1t7xiw-0000000DzCh-1S2g; Mon, 04 Nov 2024 14:06:30 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1t7xCH-0000000DsRX-2X2g for linux-arm-kernel@lists.infradead.org; Mon, 04 Nov 2024 13:32:47 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-e32ff6f578eso4836213276.1 for ; Mon, 04 Nov 2024 05:32:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1730727164; x=1731331964; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=0O9w3V8+9zARf3Lj4YZXmAy+l2Op+F1G9lpGS1sACo4=; b=bqwURE0vr+L+QIxUjT44dYwWB3N9rsbS0zUJTzlqzWXjkEscF3wHVb4xh5dp4Dl3vA b+ZCBccmSwYRkjvHnDTrhrIZ4DiGNGKbuXeuc5LV7s5Zq5r+kEgg3UiTeMJWVwtmS9C0 agcOEkOrXwgJVFy5511qN3VoFhQhyHwsgspoP2Fg4qxY6JduTHW+wYYy4UCYRvB6CfDO HP1VCOiQEL3wZlRSLfzSWbhDq5f7+1jisVHumt1i2AMWYJw8wiMt+cZIMlKniSdJGwNI taen+MGQNasOryp4ut3VcNuIhLqBabPsDy8DT0BWG1XmbNmv1RwYTmtzQqDVwUpSjQFB b0iw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1730727164; x=1731331964; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=0O9w3V8+9zARf3Lj4YZXmAy+l2Op+F1G9lpGS1sACo4=; b=rvgPdJb1Ull6AXWRaDLAtKiunkzhRCZyxavnU2OLTaATUfDPVDn0XXJzW8e1ycrhkd 1Gb87M/8xmVYXfaZJx18WI6DCzhhxdN6VO62bIX6xgrglHos8TqmMZe7nkQID3/+d+0b 3ZYRhbCu4KofRnuKJBMDK8EqB+Prh0HzfDd9PfNhB202wkblZdfICo2hUtikXesIgb4Q 
4jMDkst4avyXfKuF6PjqJmjvFvVxkuKFMmAUm0mBEsikQxD8ZcFWWee7Z1/TdKXF8FeX d5DvJGbOjj9VkBlDA3AfTxFIr3MYtLMlcfG88WwSPQzfVJvm0QYbzsrsVygY++qIlcbF GRaQ== X-Forwarded-Encrypted: i=1; AJvYcCXImfFEF2Y2j/Z9XrfUlpkzoab7SYgKwbt3CTRxqGjBu6k8XuRl5Pb3EQAN9kbiwo0NJO9A2fH89rrDmjiPKjtE@lists.infradead.org X-Gm-Message-State: AOJu0YwYK73EvovyPrSJotyhcyfQGKN2o28k4azaSF6QoRJ9uOBDttmX Ai85W2t7dhCIfjJDK2KxeS2p0rfS6rVZkhBuHQDEbnAvRCwXU39VvLbFrFrQ+eNUFJPX3SBkH8D dmppTVA== X-Google-Smtp-Source: AGHT+IEDxrAcP3xldoLSP2+yxUBuV6Ttuyw9HdeltMFShnVNq/HJr7ENXf3J/L5hnJ5D4xci+Np8Djaz59df X-Received: from big-boi.c.googlers.com ([fda3:e722:ac3:cc00:31:98fb:c0a8:129]) (user=qperret job=sendgmr) by 2002:a5b:24f:0:b0:e0b:f6aa:8088 with SMTP id 3f1490d57ef6-e30e8d353edmr30619276.1.1730727164243; Mon, 04 Nov 2024 05:32:44 -0800 (PST) Date: Mon, 4 Nov 2024 13:32:01 +0000 In-Reply-To: <20241104133204.85208-1-qperret@google.com> Mime-Version: 1.0 References: <20241104133204.85208-1-qperret@google.com> X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog Message-ID: <20241104133204.85208-16-qperret@google.com> Subject: [PATCH 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241104_053245_679320_B982FAA8 X-CRM114-Status: GOOD ( 13.46 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for non-protected guests. 
It will be called later from the fault handling path.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 19 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 24 +++++++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index acb36762e15f..4b93fb3a9a96 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 554ce31882e6..6ec64f1fee3e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -44,6 +44,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+kvm_pte_t __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 21c8a5e74d14..904f6b1edced 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -331,6 +331,24 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	kvm_pte_t ret = 0;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_mkyoung_guest(gfn, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -552,6 +570,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
+	HANDLE_FUNC(__pkvm_host_mkyoung_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 7c2aca459deb..a6a47383135b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1558,3 +1558,24 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
 
 	return ret;
 }
+
+kvm_pte_t __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	kvm_pte_t pte = 0;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	pte = kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return pte;
+}

From patchwork Mon Nov 4 13:32:02 2024
Date: Mon, 4 Nov 2024 13:32:02 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-17-qperret@google.com>
Subject: [PATCH 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

Introduce a new hypercall to flush the TLBs of non-protected guests. The
host kernel will be responsible for issuing this hypercall after changing
stage-2 permissions using the __pkvm_host_relax_guest_perms() or
__pkvm_host_wrprotect_guest() paths. This is left under the host's
responsibility for performance reasons.

Note however that the TLB maintenance for all *unmap* operations still
remains entirely under the hypervisor's responsibility for security
reasons -- an unmapped page may be donated to another entity, so a stale
TLB entry could be used to leak private data.
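To make the division of TLB-maintenance responsibility concrete, here is a rough sketch (pseudocode, not part of this patch) of how a host-side caller could combine the permission-change hypercalls from the earlier patches with the new flush hypercall; the helper name `host_wrprotect_range` is purely illustrative:

```
/*
 * Pseudocode sketch only -- illustrates the intended calling convention,
 * assuming the hypercall names introduced in this series.
 */
static int host_wrprotect_range(pkvm_handle_t handle, u64 gfn, u64 nr_pages)
{
	int ret;

	for (u64 i = 0; i < nr_pages; i++) {
		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn + i);
		if (ret)
			return ret;
	}

	/*
	 * The hypervisor did not invalidate the guest's TLBs for these
	 * permission changes, so the host must do it explicitly -- and can
	 * batch one flush over the whole range. Unmap paths, by contrast,
	 * have their TLB invalidation performed by the hypervisor itself
	 * before a page can be donated elsewhere.
	 */
	kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, handle);

	return 0;
}
```

Batching the flush after a run of per-page permission updates is precisely the performance win the commit message alludes to.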
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4b93fb3a9a96..1bf7bc51f50f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -88,6 +88,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
+	__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 904f6b1edced..1d8baa14ff1c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -396,6 +396,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }
 
+static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	struct pkvm_hyp_vm *hyp_vm;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		return;
+
+	__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+	put_pkvm_hyp_vm(hyp_vm);
+}
+
 static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -588,6 +604,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_teardown_vm),
 	HANDLE_FUNC(__pkvm_vcpu_load),
 	HANDLE_FUNC(__pkvm_vcpu_put),
+	HANDLE_FUNC(__pkvm_tlb_flush_vmid),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)

From patchwork Mon Nov 4 13:32:03 2024
Date: Mon, 4 Nov 2024 13:32:03 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-18-qperret@google.com>
Subject: [PATCH 17/18] KVM: arm64: Introduce the EL1 pKVM MMU
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org

Introduce a set of helper functions allowing to manipulate the pKVM guest
stage-2 page-tables from EL1 using pKVM's HVC interface.

Each helper has an exact one-to-one correspondence with the traditional
kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly matching
prototype. This will ease plumbing later on in mmu.c.

These callbacks track the gfn->pfn mappings in a simple rb_tree indexed by
IPA, in lieu of a page-table. This rb-tree is kept in sync with pKVM's
state and is protected by a new rwlock -- the existing mmu_lock protection
does not suffice in the map() path, where the tree must be modified while
user_mem_abort() only acquires a read_lock.

Signed-off-by: Quentin Perret
---
The embedded union inside struct kvm_pgtable is arguably a bit horrible
currently... I considered making the pgt argument to all kvm_pgtable_*()
functions an opaque void * ptr, and moving the definition of struct
kvm_pgtable to pgtable.c and the pkvm version into pkvm.c. Given that the
allocation of that data-structure is done by the caller, that means we'd
need to expose kvm_pgtable_get_pgd_size() or something that each MMU
(pgtable.c and pkvm.c) would have to implement, and things like that.
But that felt like a bigger surgery, so I went with the simpler option.
Thoughts welcome :-)

Similarly, happy to drop the mappings_lock if we want to teach
user_mem_abort() about taking a write lock on the mmu_lock in the pKVM
case, but again this implementation is the least invasive into normal KVM,
so that felt like a reasonable starting point.
---
 arch/arm64/include/asm/kvm_host.h    |   1 +
 arch/arm64/include/asm/kvm_pgtable.h |  27 ++--
 arch/arm64/include/asm/kvm_pkvm.h    |  28 ++++
 arch/arm64/kvm/pkvm.c                | 194 +++++++++++++++++++++
 4 files changed, 241 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4b02904ec7c0..2bfb5983f6f1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -87,6 +87,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 struct kvm_hyp_memcache {
 	phys_addr_t head;
 	unsigned long nr_pages;
+	struct pkvm_mapping *mapping; /* only used from EL1 */
 };
 
 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 047e1c06ae4c..9447193ee630 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -412,15 +412,24 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  * be used instead of block mappings.
  */
 struct kvm_pgtable {
-	u32 ia_bits;
-	s8 start_level;
-	kvm_pteref_t pgd;
-	struct kvm_pgtable_mm_ops *mm_ops;
-
-	/* Stage-2 only */
-	struct kvm_s2_mmu *mmu;
-	enum kvm_pgtable_stage2_flags flags;
-	kvm_pgtable_force_pte_cb_t force_pte_cb;
+	union {
+		struct {
+			u32 ia_bits;
+			s8 start_level;
+			kvm_pteref_t pgd;
+			struct kvm_pgtable_mm_ops *mm_ops;
+
+			/* Stage-2 only */
+			struct kvm_s2_mmu *mmu;
+			enum kvm_pgtable_stage2_flags flags;
+			kvm_pgtable_force_pte_cb_t force_pte_cb;
+		};
+		struct {
+			struct kvm *kvm;
+			struct rb_root mappings;
+			rwlock_t mappings_lock;
+		} pkvm;
+	};
 };
 
 /**
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..f3eed6a5fa57 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -11,6 +11,12 @@
 #include
 #include
 
+struct pkvm_mapping {
+	u64 gfn;
+	u64 pfn;
+	struct rb_node node;
+};
+
 /* Maximum number of VMs that can co-exist under pKVM. */
 #define KVM_MAX_PVMS 255
 
@@ -137,4 +143,26 @@ static inline size_t pkvm_host_sve_state_size(void)
 			SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
 }
 
+static inline pkvm_handle_t pkvm_pgt_to_handle(struct kvm_pgtable *pgt)
+{
+	return pgt->pkvm.kvm->arch.pkvm.handle;
+}
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops);
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt);
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold);
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags);
+kvm_pte_t pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc);
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level);
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte);
+
 #endif	/* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 85117ea8f351..6d04a1a0fc6b 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,3 +269,196 @@ static int __init finalize_pkvm(void)
 	return ret;
 }
 device_initcall_sync(finalize_pkvm);
+
+static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+{
+	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
+	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
+
+	if (a->gfn < b->gfn)
+		return -1;
+	if (a->gfn > b->gfn)
+		return 1;
+	return 0;
+}
+
+static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+{
+	struct rb_node *node = root->rb_node, *prev = NULL;
+	struct pkvm_mapping *mapping;
+
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		if (mapping->gfn == gfn)
+			return node;
+		prev = node;
+		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
+	}
+
+	return prev;
+}
+
+#define for_each_mapping_in_range(pgt, start_ipa, end_ipa, mapping, tmp)				\
+	for (tmp = find_first_mapping_node(&pgt->pkvm.mappings, ((start_ipa) >> PAGE_SHIFT));		\
+	     tmp && ({ mapping = rb_entry(tmp, struct pkvm_mapping, node); tmp = rb_next(tmp); 1; });)	\
+		if (mapping->gfn < ((start_ipa) >> PAGE_SHIFT))						\
+			continue;									\
+		else if (mapping->gfn >= ((end_ipa) >> PAGE_SHIFT))					\
+			break;										\
+		else
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops)
+{
+	pgt->pkvm.kvm = kvm_s2_mmu_to_kvm(mmu);
+	pgt->pkvm.mappings = RB_ROOT;
+	rwlock_init(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *node;
+
+	if (!handle)
+		return;
+
+	node = rb_first(&pgt->pkvm.mappings);
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		node = rb_next(node);
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+}
+
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags)
+{
+	struct pkvm_mapping *mapping = NULL;
+	struct kvm_hyp_memcache *cache = mc;
+	u64 gfn = addr >> PAGE_SHIFT;
+	u64 pfn = phys >> PAGE_SHIFT;
+	int ret;
+
+	if (size != PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	if (ret) {
+		/* Is the gfn already mapped due to a racing vCPU? */
+		if (ret == -EPERM)
+			ret = -EAGAIN;
+		goto unlock;
+	}
+
+	swap(mapping, cache->mapping);
+	mapping->gfn = gfn;
+	mapping->pfn = pfn;
+	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm.mappings, cmp_mappings));
+unlock:
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+	}
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	bool young = false;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
+					   mkold);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return young;
+}
+
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_relax_guest_perms, addr >> PAGE_SHIFT, prot);
+}
+
+kvm_pte_t pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT);
+}
+
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
+{
+	WARN_ON(1);
+}
+
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte)
+{
+	WARN_ON(1);
+	return NULL;
+}
+
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc)
+{
+	WARN_ON(1);
+	return -EINVAL;
+}

From patchwork Mon Nov 4 13:32:04 2024
Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=NXCdoRKt7Rne/pKH5oiyWCxEYoVlk91YPJHINOsPplo=; b=3ei0hN/VyZJimoy08ZKLnyeQVP WHmRG32ryEOTgOJYcEIs5NRCu4VWCcUW1T/QQqrVA2St8AfQvQnUF35oWJTgJ8JC+IlD6AeLl7wLJ ZmObMzT6TLx+/iuB3vWfAxHqFLjp9G/T+e/RAQERMMwWu2BQojRRK7kejBkmwt5Jlt5u/vsLni/f6 oqG0Of5q+Zpnkj6TfTkW7N05P+Bf8qxRbCygnCUCYY7G2O6WX8xXSg3sjXHg5yGCM59qThEtzjl+A QVwnkQkKOiBSgM/dYhfckLOo5t4S8oS0+BW4WbGQgCgHbYmuixlz1Di4BoBTAyqiEAMRK7UWfYdCg YHElHXhg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1t7xnw-0000000E09j-1UYz; Mon, 04 Nov 2024 14:11:40 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1t7xCO-0000000DsVD-2t1Q for linux-arm-kernel@lists.infradead.org; Mon, 04 Nov 2024 13:32:54 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-e2b9f2c6559so6261042276.2 for ; Mon, 04 Nov 2024 05:32:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1730727171; x=1731331971; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=NXCdoRKt7Rne/pKH5oiyWCxEYoVlk91YPJHINOsPplo=; b=Rnbg6wq/HpdeuMIRTnHjCqE9bnw5iaaBDgE7sMUJfFBJW6ujkidLZLIHq2IW9OYv78 XVCi+IYftsCOtpDm9kdwOUwlE19/AaIjT5fn1MpcL/y/lhZsGQ1FTWnXG/aoOcrbra/w vsb9FhX9TsXH0dXO2h25D/io1CnIc+zBf5WS46ftBJBCbIxlUGX4PxNEFg56GqlduQv9 0wWsOyroIR6MsaiqEeXmnvn5jQNJLprHN33ZrxcG2NMWfPsQJOD1JvH0PHR+nPijyL/J h0dbSTFThxnFvj5u1UlMZOYhQVZgrlprCbtVhuR5fmuKRh/fDVDuJ4undI1CfpV52Dt4 +lGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1730727171; x=1731331971; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=NXCdoRKt7Rne/pKH5oiyWCxEYoVlk91YPJHINOsPplo=; 
Date: Mon, 4 Nov 2024 13:32:04 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog
Message-ID: <20241104133204.85208-19-qperret@google.com>
Subject: [PATCH 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org
Introduce the KVM_PGT_S2() helper macro to allow switching from the
traditional pgtable code to the pKVM version easily in mmu.c. The cost
of this 'indirection' is expected to be very minimal due to
is_protected_kvm_enabled() being backed by a static key.

With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/arm.c               |   9 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   2 -
 arch/arm64/kvm/mmu.c               | 104 +++++++++++++++++++++--------
 3 files changed, 84 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2bf168b17a77..890c89874c6b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -506,7 +506,10 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);
 
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -578,6 +581,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -598,6 +604,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 1d8baa14ff1c..cf0fd83552c9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.hcr_el2	= host_vcpu->arch.hcr_el2;
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 80dd61038cc7..fcf8fdcccd22 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_S2(fn, ...)							\
+	({									\
+		typeof(kvm_pgtable_stage2_ ## fn) *__fn = kvm_pgtable_stage2_ ## fn; \
+		if (is_protected_kvm_enabled())					\
+			__fn = pkvm_pgtable_ ## fn;				\
+		__fn(__VA_ARGS__);						\
+	})
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
 					   phys_addr_t size)
 {
@@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
 			return -EINVAL;
 
 		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_S2(split, pgt, addr, next - addr, cache);
 		if (ret)
 			break;
 	} while (addr = next, addr != end);
@@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
				      gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
 	return 0;
 }
 
@@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 	void *pgtable = page_to_virt(page);
 	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -316,6 +333,12 @@ static void invalidate_icache_guest_page(void *va, size_t size)
  * destroying the VM), otherwise another faulting VCPU may come in and mess
  * with things behind our backs.
  */
+
+static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(unmap, pgt, addr, size);
+}
+
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
 				 bool may_block)
 {
@@ -324,8 +347,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
-				   may_block));
+	WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block));
 }
 
 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -334,9 +356,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
 	__unmap_stage2_range(mmu, start, size, may_block);
 }
 
+static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(flush, pgt, addr, size);
+}
+
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush);
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +969,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 		return -ENOMEM;
 
 	mmu->arch = &kvm->arch;
-	err = kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops);
+	err = KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops);
 	if (err)
 		goto out_free_pgtable;
 
+	mmu->pgt = pgt;
+	if (is_protected_kvm_enabled())
+		return 0;
+
 	mmu->last_vcpu_ran = alloc_percpu(typeof(*mmu->last_vcpu_ran));
 	if (!mmu->last_vcpu_ran) {
 		err = -ENOMEM;
@@ -959,7 +990,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
 	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
 
-	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 
 	if (kvm_is_nested_s2_mmu(kvm, mmu))
@@ -968,7 +998,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return 0;
 
 out_destroy_pgtable:
-	kvm_pgtable_stage2_destroy(pgt);
+	KVM_PGT_S2(destroy, pgt);
 out_free_pgtable:
 	kfree(pgt);
 	return err;
@@ -1065,7 +1095,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 	write_unlock(&kvm->mmu_lock);
 
 	if (pgt) {
-		kvm_pgtable_stage2_destroy(pgt);
+		KVM_PGT_S2(destroy, pgt);
 		kfree(pgt);
 	}
 }
@@ -1082,9 +1112,11 @@ static void *hyp_mc_alloc_fn(void *unused)
 
 void free_hyp_memcache(struct kvm_hyp_memcache *mc)
 {
-	if (is_protected_kvm_enabled())
-		__free_hyp_memcache(mc, hyp_mc_free_fn,
-				    kvm_host_va, NULL);
+	if (!is_protected_kvm_enabled())
+		return;
+
+	kfree(mc->mapping);
+	__free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL);
 }
 
 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
@@ -1092,6 +1124,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages)
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	if (!mc->mapping) {
+		mc->mapping = kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT);
+		if (!mc->mapping)
+			return -ENOMEM;
+	}
+
 	return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn,
				    kvm_host_pa, NULL);
 }
@@ -1130,8 +1168,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			break;
 
 		write_lock(&kvm->mmu_lock);
-		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-					     &cache, 0);
+		ret = KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0);
 		write_unlock(&kvm->mmu_lock);
 		if (ret)
 			break;
@@ -1143,6 +1180,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	return ret;
 }
 
+static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(wrprotect, pgt, addr, size);
+}
 /**
  * kvm_stage2_wp_range() - write protect stage2 memory region range
  * @mmu:	The KVM stage-2 MMU pointer
@@ -1151,7 +1192,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  */
 void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect);
 }
 
 /**
@@ -1431,9 +1472,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
+	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
@@ -1460,8 +1501,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
 	if (!fault_is_perm || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+		if (!is_protected_kvm_enabled()) {
+			memcache = &vcpu->arch.mmu_page_cache;
+			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+		} else {
+			memcache = &vcpu->arch.pkvm_memcache;
+			ret = topup_hyp_memcache(memcache, min_pages);
+		}
 		if (ret)
 			return ret;
 	}
@@ -1482,7 +1530,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
	 */
-	if (logging_active) {
+	if (logging_active || is_protected_kvm_enabled()) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1684,9 +1732,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
+		ret = KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags);
 	} else {
-		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize,
				     __pfn_to_phys(pfn), prot,
				     memcache, flags);
 	}
@@ -1715,7 +1763,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
+	pte = KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 
 	if (kvm_pte_valid(pte))
@@ -1758,7 +1806,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	}
 
 	/* Falls between the IPA range and the PARange? */
-	if (fault_ipa >= BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) {
+	if (fault_ipa >= BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) {
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
 
 		if (is_iabt)
@@ -1924,7 +1972,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
						   range->start << PAGE_SHIFT,
						   size, true);
 
 	/*
@@ -1940,7 +1988,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+	return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt,
						   range->start << PAGE_SHIFT,
						   size, false);
 }