From patchwork Mon Mar 14 20:01:10 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12780744
Date: Mon, 14 Mar 2022 13:01:10 -0700
In-Reply-To: <20220314200148.2695206-1-kaleshsingh@google.com>
Message-Id: <20220314200148.2695206-2-kaleshsingh@google.com>
References: <20220314200148.2695206-1-kaleshsingh@google.com>
Subject: [PATCH v6 1/8] KVM: arm64: Introduce hyp_alloc_private_va_range()
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Mark Rutland,
 Mark Brown, Masami Hiramatsu, Peter Collingbourne,
 "Madhavan T. Venkataraman", Stephen Boyd, Andrew Scull, Andrew Jones,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

hyp_alloc_private_va_range() can be used to reserve private VA ranges
in the nVHE hypervisor. Allocations are aligned based on the order of
the requested size.

This will be used to implement stack guard pages for the KVM nVHE
hypervisor (nVHE Hyp mode / not pKVM) in a subsequent patch in the
series.

Signed-off-by: Kalesh Singh
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
---
Changes in v6:
  - Update kernel-doc for hyp_alloc_private_va_range() and add return
    description, per Stephen
  - Update hyp_alloc_private_va_range() to return an int error code,
    per Stephen
  - Replace IS_ERR() checks with IS_ERR_VALUE() check, per Stephen
  - Clean up goto, per Stephen

Changes in v5:
  - Align private allocations based on the order of their size, per Marc

Changes in v4:
  - Handle null ptr in hyp_alloc_private_va_range() and replace
    IS_ERR_OR_NULL checks in callers with IS_ERR checks, per Fuad
  - Fix kernel-doc comments format, per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/mmu.c             | 66 +++++++++++++++++++++-----------
 2 files changed, 45 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 81839e9a8a24..3cc9aa25f510 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -153,6 +153,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 int kvm_share_hyp(void *from, void *to);
 void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
+int hyp_alloc_private_va_range(size_t size, unsigned long *haddr);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
 			   void __iomem **haddr);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc2aba953299..7326d683c500 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -457,23 +457,22 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	return 0;
 }
 
-static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
-					unsigned long *haddr,
-					enum kvm_pgtable_prot prot)
+
+/**
+ * hyp_alloc_private_va_range - Allocates a private VA range.
+ * @size:	The size of the VA range to reserve.
+ * @haddr:	The hypervisor virtual start address of the allocation.
+ *
+ * The private virtual address (VA) range is allocated below io_map_base
+ * and aligned based on the order of @size.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
 {
 	unsigned long base;
 	int ret = 0;
 
-	if (!kvm_host_owns_hyp_mappings()) {
-		base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
-					 phys_addr, size, prot);
-		if (IS_ERR_OR_NULL((void *)base))
-			return PTR_ERR((void *)base);
-		*haddr = base;
-
-		return 0;
-	}
-
 	mutex_lock(&kvm_hyp_pgd_mutex);
 
 	/*
@@ -484,30 +483,53 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
 	 *
 	 * The allocated size is always a multiple of PAGE_SIZE.
 	 */
-	size = PAGE_ALIGN(size + offset_in_page(phys_addr));
-	base = io_map_base - size;
+	base = io_map_base - PAGE_ALIGN(size);
+
+	/* Align the allocation based on the order of its size */
+	base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));
 
 	/*
 	 * Verify that BIT(VA_BITS - 1) hasn't been flipped by
 	 * allocating the new area, as it would indicate we've
 	 * overflowed the idmap/IO address range.
 	 */
-	if ((base ^ io_map_base) & BIT(VA_BITS - 1))
+	if (!base || (base ^ io_map_base) & BIT(VA_BITS - 1))
 		ret = -ENOMEM;
 	else
-		io_map_base = base;
+		*haddr = io_map_base = base;
 
 	mutex_unlock(&kvm_hyp_pgd_mutex);
 
+	return ret;
+}
+
+static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
+					unsigned long *haddr,
+					enum kvm_pgtable_prot prot)
+{
+	unsigned long addr;
+	int ret = 0;
+
+	if (!kvm_host_owns_hyp_mappings()) {
+		addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
+					 phys_addr, size, prot);
+		if (IS_ERR_VALUE(addr))
+			return addr;
+		*haddr = addr;
+
+		return 0;
+	}
+
+	size += offset_in_page(phys_addr);
+	ret = hyp_alloc_private_va_range(size, &addr);
 	if (ret)
-		goto out;
+		return ret;
 
-	ret = __create_hyp_mappings(base, size, phys_addr, prot);
+	ret = __create_hyp_mappings(addr, size, phys_addr, prot);
 	if (ret)
-		goto out;
+		return ret;
 
-	*haddr = base + offset_in_page(phys_addr);
-out:
+	*haddr = addr + offset_in_page(phys_addr);
 	return ret;
 }
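
[Editor's note] The order-based alignment introduced above (base =
ALIGN_DOWN(io_map_base - PAGE_ALIGN(size), PAGE_SIZE << get_order(size)))
can be illustrated with a small stand-alone program. This is a sketch
only: PAGE_SHIFT, PAGE_SIZE, PAGE_ALIGN(), ALIGN_DOWN(), get_order() and
the io_map_base value below are simplified user-space stand-ins for the
kernel's definitions, not the hypervisor code itself.

#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

/*
 * Simplified get_order() for size > 0: the smallest n such that
 * (PAGE_SIZE << n) >= size, matching the kernel's semantics.
 */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long io_map_base = 0x40000000UL;	/* made-up example value */
	unsigned long size = 3 * PAGE_SIZE;		/* three-page request */
	unsigned long base;

	/* Mirror the two allocation steps in hyp_alloc_private_va_range() */
	base = io_map_base - PAGE_ALIGN(size);
	base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));

	printf("base = %#lx, aligned to %#lx\n",
	       base, PAGE_SIZE << get_order(size));
	return 0;
}

With these assumptions it prints "base = 0x3fffc000, aligned to 0x4000":
a three-page request has order 2, so its base is rounded down to a
four-page (16 KiB) boundary. The rounding costs a bounded amount of VA
slack below io_map_base, and in exchange every allocation's start is
aligned to its own power-of-two footprint, a property the stack
guard-page patches later in this series build on.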