From patchwork Thu May 19 13:40:53 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855095
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
 Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng, Quentin Perret,
 Suzuki K Poulose, Michael Roth, Mark Rutland, Fuad Tabba, Oliver Upton,
 Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 18/89] KVM: arm64: Factor out private range VA allocation
Date: Thu, 19 May 2022 14:40:53 +0100
Message-Id: <20220519134204.5379-19-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

__pkvm_create_private_mapping() is
currently responsible for allocating VA space in the hypervisor's "private"
range and creating stage-1 mappings. In order to allow reusing the VA space
allocation logic from other places, let's factor it out into a standalone
function.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mm.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 168e7fbe9a3c..4377b067dc0e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -37,6 +37,22 @@ static int __pkvm_create_mappings(unsigned long start, unsigned long size,
 	return err;
 }
 
+static unsigned long hyp_alloc_private_va_range(size_t size)
+{
+	unsigned long addr = __io_map_base;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+	__io_map_base += PAGE_ALIGN(size);
+
+	/* Are we overflowing on the vmemmap ? */
+	if (__io_map_base > __hyp_vmemmap) {
+		__io_map_base = addr;
+		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	}
+
+	return addr;
+}
+
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot)
 {
@@ -45,16 +61,10 @@ unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 
 	hyp_spin_lock(&pkvm_pgd_lock);
 
-	size = PAGE_ALIGN(size + offset_in_page(phys));
-	addr = __io_map_base;
-	__io_map_base += size;
-
-	/* Are we overflowing on the vmemmap ? */
-	if (__io_map_base > __hyp_vmemmap) {
-		__io_map_base -= size;
-		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	size = size + offset_in_page(phys);
+	addr = hyp_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
 		goto out;
-	}
 
 	err = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
 	if (err) {