From patchwork Thu Jun 30 13:57:31 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901900
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/24] KVM: arm64: Prevent the donation of no-map pages
Date: Thu, 30 Jun 2022 14:57:31 +0100
Message-Id: <20220630135747.26983-9-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Memory regions marked as no-map in DT routinely include TrustZone
carve-outs and the like. Although donating such pages to the hypervisor
may not breach confidentiality, it could be used to corrupt the
hypervisor's state in uncontrollable ways. To prevent this, let's block
host-initiated memory transitions targeting no-map pages altogether in
nVHE protected mode, as there is currently no valid reason to allow
them. Thankfully, the pKVM EL2 hypervisor keeps a full copy of the
host's list of memblock regions, making it easy to check any given
region for the MEMBLOCK_NOMAP flag at EL2.
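For illustration, the check boils down to a binary search over the
sorted copy of the memory map, followed by a flag test on the matching
region. Below is a minimal standalone sketch of that idea (not the
kernel code itself; the names struct region, REGION_NOMAP, find_region
and the sample addresses are all illustrative stand-ins):

/* Standalone sketch: binary-search sorted regions, reject no-map ones. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REGION_NOMAP	(1u << 0)	/* stand-in for MEMBLOCK_NOMAP */

struct region {
	uint64_t	base;
	uint64_t	size;
	unsigned int	flags;
};

/* Sorted, non-overlapping regions: a tiny stand-in for the hyp memblock copy. */
static struct region regions[] = {
	{ .base = 0x40000000, .size = 0x10000000, .flags = 0 },
	{ .base = 0x80000000, .size = 0x00200000, .flags = REGION_NOMAP },
	{ .base = 0x90000000, .size = 0x10000000, .flags = 0 },
};

static const int nr_regions = sizeof(regions) / sizeof(regions[0]);

/* Binary search for the region covering @addr; NULL if it is not memory. */
static struct region *find_region(uint64_t addr)
{
	int left = 0, right = nr_regions;

	while (left < right) {
		int cur = (left + right) / 2;
		struct region *reg = &regions[cur];
		uint64_t end = reg->base + reg->size;

		if (addr < reg->base)
			right = cur;
		else if (addr >= end)
			left = cur + 1;
		else
			return reg;
	}

	return NULL;
}

/* Mirror of the patch's policy: an address is usable only if it is memory
 * and its region does not carry the no-map flag. */
static bool addr_is_allowed(uint64_t addr)
{
	struct region *reg = find_region(addr);

	return reg && !(reg->flags & REGION_NOMAP);
}

int main(void)
{
	printf("0x40001000 allowed: %d\n", addr_is_allowed(0x40001000)); /* 1 */
	printf("0x80001000 allowed: %d\n", addr_is_allowed(0x80001000)); /* 0: no-map */
	printf("0x20000000 allowed: %d\n", addr_is_allowed(0x20000000)); /* 0: not memory */
	return 0;
}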
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f475d554c9fd..e7015bbefbea 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -193,7 +193,7 @@ struct kvm_mem_range {
 	u64 end;
 };
 
-static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
+static struct memblock_region *find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
 {
 	int cur, left = 0, right = hyp_memblock_nr;
 	struct memblock_region *reg;
@@ -216,18 +216,28 @@ static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
 		} else {
 			range->start = reg->base;
 			range->end = end;
-			return true;
+			return reg;
 		}
 	}
 
-	return false;
+	return NULL;
 }
 
 bool addr_is_memory(phys_addr_t phys)
 {
 	struct kvm_mem_range range;
 
-	return find_mem_range(phys, &range);
+	return !!find_mem_range(phys, &range);
+}
+
+static bool addr_is_allowed_memory(phys_addr_t phys)
+{
+	struct memblock_region *reg;
+	struct kvm_mem_range range;
+
+	reg = find_mem_range(phys, &range);
+
+	return reg && !(reg->flags & MEMBLOCK_NOMAP);
 }
 
 static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
@@ -350,7 +360,7 @@ static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
 static int host_stage2_idmap(u64 addr)
 {
 	struct kvm_mem_range range;
-	bool is_memory = find_mem_range(addr, &range);
+	bool is_memory = !!find_mem_range(addr, &range);
 	enum kvm_pgtable_prot prot;
 	int ret;
 
@@ -428,7 +438,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
 	struct check_walk_data *d = arg;
 	kvm_pte_t pte = *ptep;
 
-	if (kvm_pte_valid(pte) && !addr_is_memory(kvm_pte_to_phys(pte)))
+	if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
 		return -EINVAL;
 
 	return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
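A brief note on the shape of the change, as read from the diff above:
having find_mem_range() return the matching memblock_region pointer
instead of a bool lets addr_is_allowed_memory() inspect reg->flags from
the result of the single binary search it already performs, while the
existing callers that only need a yes/no answer keep their boolean
semantics via the !! idiom.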