From patchwork Wed Sep 20 19:27:29 2023
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13393326
From: Suraj Jitindar Singh
To:
Cc: Will Deacon, Quentin Perret, Marc Zyngier, Suraj Jitindar Singh
Subject: [PATCH stable 6.1.y 2/2] KVM: arm64: Prevent unconditional donation
 of unmapped regions from the host
Date: Wed, 20 Sep 2023 12:27:29 -0700
Message-ID: <20230920192729.694309-2-surajjs@amazon.com>
In-Reply-To: <20230920192729.694309-1-surajjs@amazon.com>
References: <20230920192729.694309-1-surajjs@amazon.com>
X-Mailer: git-send-email 2.34.1

From: Will Deacon

commit 09cce60bddd6461a93a5bf434265a47827d1bc6f upstream.

Since host stage-2 mappings are created lazily, we cannot rely solely
on the pte in order to recover the target physical address when
checking a host-initiated memory transition as this permits donation
of unmapped regions corresponding to MMIO or "no-map" memory.

Instead of inspecting the pte, move the addr_is_allowed_memory() check
into the host callback function where it is passed the physical
address directly from the walker.

Cc: Quentin Perret
Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
Signed-off-by: Will Deacon
Signed-off-by: Marc Zyngier
Link: https://lore.kernel.org/r/20230518095844.1178-1-will@kernel.org
[ bp: s/ctx->addr/addr in __check_page_state_visitor due to missing commit
  "KVM: arm64: Combine visitor arguments into a context structure" in stable. ]
Signed-off-by: Suraj Jitindar Singh
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0f6c053686c7..0faa330a41ed 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -424,7 +424,7 @@ struct pkvm_mem_share {
 
 struct check_walk_data {
 	enum pkvm_page_state	desired;
-	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
+	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte, u64 addr);
 };
 
 static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
@@ -435,10 +435,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
 	struct check_walk_data *d = arg;
 	kvm_pte_t pte = *ptep;
 
-	if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
-		return -EINVAL;
-
-	return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
+	return d->get_page_state(pte, addr) == d->desired ? 0 : -EPERM;
 }
 
 static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -453,8 +450,11 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
 {
+	if (!addr_is_allowed_memory(addr))
+		return PKVM_NOPAGE;
+
 	if (!kvm_pte_valid(pte) && pte)
 		return PKVM_NOPAGE;
 
@@ -521,7 +521,7 @@ static int host_initiate_unshare(u64 *completer_addr,
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
 
-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
 {
 	if (!kvm_pte_valid(pte))
 		return PKVM_NOPAGE;
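
(Editorial aside, not part of the patch.) The snippet below is a minimal,
self-contained C model of the failure mode the commit message describes:
because host stage-2 mappings are created lazily, an unmapped (invalid) pte
made the old validity-gated check fall through, so an MMIO/"no-map" address
could be donated. The types, the 1GiB "allowed" boundary and the helper
bodies are invented purely for illustration; in the real patch the address
check lives in host_get_page_state() and returns PKVM_NOPAGE rather than
rejecting the walk directly.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t kvm_pte_t;

/* Toy stand-ins: a pte of 0 models a lazily-unmapped (invalid) host entry. */
static bool kvm_pte_valid(kvm_pte_t pte)       { return pte & 1; }
static uint64_t kvm_pte_to_phys(kvm_pte_t pte) { return pte & ~1ULL; }

/* Pretend everything below 1GiB is RAM; anything above is MMIO/"no-map". */
static bool addr_is_allowed_memory(uint64_t phys) { return phys < (1ULL << 30); }

/* Old scheme: only a *valid* pte was checked against the allowed ranges. */
static bool old_check_rejects(kvm_pte_t pte)
{
	return kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte));
}

/* New scheme: the walker-provided address is checked unconditionally. */
static bool new_check_rejects(kvm_pte_t pte, uint64_t addr)
{
	(void)pte;
	return !addr_is_allowed_memory(addr);
}

int main(void)
{
	uint64_t mmio_addr = 2ULL << 30;   /* outside the allowed range    */
	kvm_pte_t unmapped_pte = 0;        /* lazily created, still empty  */

	printf("old check rejects unmapped MMIO donation: %d\n",
	       old_check_rejects(unmapped_pte));            /* 0: slips through */
	printf("new check rejects unmapped MMIO donation: %d\n",
	       new_check_rejects(unmapped_pte, mmio_addr)); /* 1: refused       */
	return 0;
}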