From patchwork Thu Jun 30 13:57:31 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12901854
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei, Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng, Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland, Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/24] KVM: arm64: Prevent the donation of no-map pages
Date: Thu, 30 Jun 2022 14:57:31 +0100
Message-Id: <20220630135747.26983-9-will@kernel.org>
In-Reply-To: <20220630135747.26983-1-will@kernel.org>
References: <20220630135747.26983-1-will@kernel.org>

From: Quentin Perret

Memory regions marked as no-map in DT routinely
include TrustZone carve-outs and such. Although donating such pages to the
hypervisor may not breach confidentiality, they could be used to corrupt
its state in uncontrollable ways. To prevent this, let's block
host-initiated memory transitions targeting no-map pages altogether in
nVHE protected mode, as there is currently no valid reason to allow them.
Thankfully, the pKVM EL2 hypervisor keeps a full copy of the host's list
of memblock regions, which makes it easy to check any given region for
the MEMBLOCK_NOMAP flag at EL2.

Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f475d554c9fd..e7015bbefbea 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -193,7 +193,7 @@ struct kvm_mem_range {
 	u64 end;
 };
 
-static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
+static struct memblock_region *find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
 {
 	int cur, left = 0, right = hyp_memblock_nr;
 	struct memblock_region *reg;
@@ -216,18 +216,28 @@ static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
 		} else {
 			range->start = reg->base;
 			range->end = end;
-			return true;
+			return reg;
 		}
 	}
 
-	return false;
+	return NULL;
 }
 
 bool addr_is_memory(phys_addr_t phys)
 {
 	struct kvm_mem_range range;
 
-	return find_mem_range(phys, &range);
+	return !!find_mem_range(phys, &range);
+}
+
+static bool addr_is_allowed_memory(phys_addr_t phys)
+{
+	struct memblock_region *reg;
+	struct kvm_mem_range range;
+
+	reg = find_mem_range(phys, &range);
+
+	return reg && !(reg->flags & MEMBLOCK_NOMAP);
 }
 
 static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
@@ -350,7 +360,7 @@ static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot pr
 static int host_stage2_idmap(u64 addr)
 {
 	struct kvm_mem_range range;
-	bool is_memory = find_mem_range(addr, &range);
+	bool is_memory = !!find_mem_range(addr, &range);
 	enum kvm_pgtable_prot prot;
 	int ret;
 
@@ -428,7 +438,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
 	struct check_walk_data *d = arg;
 	kvm_pte_t pte = *ptep;
 
-	if (kvm_pte_valid(pte) && !addr_is_memory(kvm_pte_to_phys(pte)))
+	if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
 		return -EINVAL;
 
 	return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
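For reviewers wanting to see the control flow in isolation, here is a minimal userspace sketch of what the patched lookup does: a binary search over a sorted region list that now returns the containing region rather than a bool, so the caller can inspect its flags. The `hyp_memory` contents, the struct layouts, and `MEMBLOCK_NOMAP`'s value (0x4 in current kernels) are illustrative assumptions, not the kernel's actual data.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel types used by the patch. */
#define MEMBLOCK_NOMAP 0x4	/* assumed to match enum memblock_flags */

typedef uint64_t phys_addr_t;

struct memblock_region {
	phys_addr_t base;
	phys_addr_t size;
	unsigned int flags;
};

struct kvm_mem_range {
	uint64_t start;
	uint64_t end;
};

/* Sorted, non-overlapping copy of the host memblock list (made-up values). */
static struct memblock_region hyp_memory[] = {
	{ .base = 0x40000000, .size = 0x10000000, .flags = 0 },
	{ .base = 0x54000000, .size = 0x00200000, .flags = MEMBLOCK_NOMAP },
	{ .base = 0x60000000, .size = 0x20000000, .flags = 0 },
};
static int hyp_memblock_nr = 3;

/*
 * Binary search mirroring the patched find_mem_range(): return the
 * containing region (or NULL), filling @range with the region or gap
 * bounds so the caller can map/unmap the whole range at once.
 */
static struct memblock_region *find_mem_range(phys_addr_t addr,
					      struct kvm_mem_range *range)
{
	int cur, left = 0, right = hyp_memblock_nr;
	struct memblock_region *reg;
	phys_addr_t end;

	range->start = 0;
	range->end = UINT64_MAX;

	while (left < right) {
		cur = (left + right) >> 1;
		reg = &hyp_memory[cur];
		end = reg->base + reg->size;
		if (addr < reg->base) {
			right = cur;
			range->end = reg->base;
		} else if (addr >= end) {
			left = cur + 1;
			range->start = end;
		} else {
			range->start = reg->base;
			range->end = end;
			return reg;
		}
	}

	return NULL;
}

/* The new EL2-side predicate: is memory, and is not marked no-map. */
static bool addr_is_allowed_memory(phys_addr_t phys)
{
	struct kvm_mem_range range;
	struct memblock_region *reg = find_mem_range(phys, &range);

	return reg && !(reg->flags & MEMBLOCK_NOMAP);
}
```

Returning the region pointer keeps `addr_is_memory()` a one-liner (`!!find_mem_range(...)`) while letting the new check reject no-map regions without a second lookup.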