From patchwork Mon Mar  7 18:48:59 2022
X-Patchwork-Id: 12772234
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:48:59 -0800
Subject: [PATCH v5 1/8] KVM: arm64: Introduce hyp_alloc_private_va_range()
Message-Id: <20220307184935.1704614-2-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

hyp_alloc_private_va_range() can be used to reserve private VA ranges
in the nVHE hypervisor. Allocations are aligned based on the order of
the requested size.

This will be used to implement stack guard pages for the KVM nVHE
hypervisor (nVHE Hyp mode / not pKVM), in a subsequent patch in the
series.

Signed-off-by: Kalesh Singh
---
Changes in v5:
  - Align private allocations based on the order of their size, per Marc

Changes in v4:
  - Handle null ptr in hyp_alloc_private_va_range() and replace
    IS_ERR_OR_NULL checks in callers with IS_ERR checks, per Fuad
  - Fix kernel-doc comments format, per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/mmu.c             | 63 +++++++++++++++++++++-----------
 2 files changed, 42 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 81839e9a8a24..514cfee76597 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -153,6 +153,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 int kvm_share_hyp(void *from, void *to);
 void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
+unsigned long hyp_alloc_private_va_range(size_t size);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
                            void __iomem **kaddr,
                            void __iomem **haddr);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc2aba953299..ccb2847ee2f4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -457,22 +457,17 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	return 0;
 }
 
-static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
-					unsigned long *haddr,
-					enum kvm_pgtable_prot prot)
+
+/**
+ * hyp_alloc_private_va_range - Allocates a private VA range.
+ * @size:	The size of the VA range to reserve.
+ *
+ * The private VA range is allocated below io_map_base and
+ * aligned based on the order of @size.
+ */
+unsigned long hyp_alloc_private_va_range(size_t size)
 {
 	unsigned long base;
-	int ret = 0;
-
-	if (!kvm_host_owns_hyp_mappings()) {
-		base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
-					 phys_addr, size, prot);
-		if (IS_ERR_OR_NULL((void *)base))
-			return PTR_ERR((void *)base);
-		*haddr = base;
-
-		return 0;
-	}
 
 	mutex_lock(&kvm_hyp_pgd_mutex);
 
@@ -484,29 +479,53 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
 	 *
 	 * The allocated size is always a multiple of PAGE_SIZE.
 	 */
-	size = PAGE_ALIGN(size + offset_in_page(phys_addr));
-	base = io_map_base - size;
+	base = io_map_base - PAGE_ALIGN(size);
+
+	/* Align the allocation based on the order of its size */
+	base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));
 
 	/*
 	 * Verify that BIT(VA_BITS - 1) hasn't been flipped by
 	 * allocating the new area, as it would indicate we've
 	 * overflowed the idmap/IO address range.
 	 */
-	if ((base ^ io_map_base) & BIT(VA_BITS - 1))
-		ret = -ENOMEM;
+	if (!base || (base ^ io_map_base) & BIT(VA_BITS - 1))
+		base = (unsigned long)ERR_PTR(-ENOMEM);
 	else
 		io_map_base = base;
 
 	mutex_unlock(&kvm_hyp_pgd_mutex);
 
-	if (ret)
-		goto out;
+	return base;
+}
+
+static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
+					unsigned long *haddr,
+					enum kvm_pgtable_prot prot)
+{
+	unsigned long addr;
+	int ret = 0;
+
+	if (!kvm_host_owns_hyp_mappings()) {
+		addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
+					 phys_addr, size, prot);
+		if (IS_ERR((void *)addr))
+			return PTR_ERR((void *)addr);
+		*haddr = addr;
+
+		return 0;
+	}
+
+	size += offset_in_page(phys_addr);
+	addr = hyp_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
+		return PTR_ERR((void *)addr);
 
-	ret = __create_hyp_mappings(base, size, phys_addr, prot);
+	ret = __create_hyp_mappings(addr, size, phys_addr, prot);
 	if (ret)
 		goto out;
 
-	*haddr = base + offset_in_page(phys_addr);
+	*haddr = addr + offset_in_page(phys_addr);
 out:
 	return ret;
 }
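As a worked illustration of the order-based alignment above (not part of the
patch): the stand-alone C sketch below re-implements get_order(), PAGE_ALIGN()
and ALIGN_DOWN() for user space; the io_map_base value and the 4K page size
are assumptions.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

/* Smallest order such that (PAGE_SIZE << order) >= size, as in the kernel. */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long io_map_base = 0x40000000UL;	/* assumed start value */
	unsigned long size = 3 * PAGE_SIZE;		/* e.g. a 3-page request */
	unsigned long base;

	base = io_map_base - PAGE_ALIGN(size);
	/* 3 pages round up to order 2, so align down to a 4-page boundary */
	base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));

	printf("base = %#lx (aligned to %#lx)\n",
	       base, PAGE_SIZE << get_order(size));
	return 0;
}

Running this prints base = 0x3fffc000: the 3-page request costs a 4-page
aligned slot, which is the price paid for the alignment guarantee.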
From patchwork Mon Mar  7 18:49:00 2022
X-Patchwork-Id: 12772235
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:49:00 -0800
Subject: [PATCH v5 2/8] KVM: arm64: Introduce pkvm_alloc_private_va_range()
Message-Id: <20220307184935.1704614-3-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

pkvm_alloc_private_va_range() can be used to reserve private VA ranges
in the pKVM nVHE hypervisor. Allocations are aligned based on the order
of the requested size.

This will be used to implement stack guard pages for the pKVM nVHE
hypervisor (in a subsequent patch in the series).

Credits to Quentin Perret for the idea of moving private VA allocation
out of __pkvm_create_private_mapping()

Signed-off-by: Kalesh Singh
---
Changes in v5:
  - Align private allocations based on the order of their size, per Marc

Changes in v4:
  - Handle null ptr in pkvm_alloc_private_va_range() and replace
    IS_ERR_OR_NULL checks in callers with IS_ERR checks, per Fuad
  - Fix kernel-doc comments format, per Fuad
  - Format __pkvm_create_private_mapping() prototype args (< 80 col), per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

Changes in v2:
  - Allow specifying an alignment for the private VA allocations, per Marc

 arch/arm64/kvm/hyp/include/nvhe/mm.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mm.c         | 56 ++++++++++++++++++----------
 2 files changed, 37 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 2d08510c6cc1..4489c3c849de 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -21,6 +21,7 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot);
+unsigned long pkvm_alloc_private_va_range(size_t size);
 
 static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
 				     unsigned long *start, unsigned long *end)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 526a7d6fa86f..c0943e541a8d 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -37,38 +37,54 @@ static int __pkvm_create_mappings(unsigned long start, unsigned long size,
 	return err;
 }
 
-unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
-					    enum kvm_pgtable_prot prot)
+/**
+ * pkvm_alloc_private_va_range - Allocates a private VA range.
+ * @size:	The size of the VA range to reserve.
+ *
+ * The private VA range is allocated above __io_map_base and
+ * aligned based on the order of @size.
+ */
+unsigned long pkvm_alloc_private_va_range(size_t size)
 {
-	unsigned long addr;
-	int err;
+	unsigned long base, addr;
 
 	hyp_spin_lock(&pkvm_pgd_lock);
 
-	size = PAGE_ALIGN(size + offset_in_page(phys));
-	addr = __io_map_base;
-	__io_map_base += size;
+	/* Align the allocation based on the order of its size */
+	addr = ALIGN(__io_map_base, PAGE_SIZE << get_order(size));
+
+	/* The allocated size is always a multiple of PAGE_SIZE */
+	base = addr + PAGE_ALIGN(size);
 
 	/* Are we overflowing on the vmemmap ? */
-	if (__io_map_base > __hyp_vmemmap) {
-		__io_map_base -= size;
+	if (!addr || base > __hyp_vmemmap)
 		addr = (unsigned long)ERR_PTR(-ENOMEM);
-		goto out;
-	}
+	else
+		__io_map_base = base;
 
-	err = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
-	if (err) {
-		addr = (unsigned long)ERR_PTR(err);
-		goto out;
-	}
-
-	addr = addr + offset_in_page(phys);
-out:
 	hyp_spin_unlock(&pkvm_pgd_lock);
 
 	return addr;
 }
 
+unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
+					    enum kvm_pgtable_prot prot)
+{
+	unsigned long addr;
+	int err;
+
+	size += offset_in_page(phys);
+	addr = pkvm_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
+		return addr;
+
+	err = __pkvm_create_mappings(addr, size, phys, prot);
+	if (err)
+		return (unsigned long)ERR_PTR(err);
+
+	return addr + offset_in_page(phys);
+}
+
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot)
 {
 	unsigned long start = (unsigned long)from;
@@ -155,7 +171,7 @@ int hyp_map_vectors(void)
 	bp_base = (void *)__pkvm_create_private_mapping(phys,
 							__BP_HARDEN_HYP_VECS_SZ,
 							PAGE_HYP_EXEC);
-	if (IS_ERR_OR_NULL(bp_base))
+	if (IS_ERR(bp_base))
 		return PTR_ERR(bp_base);
 
 	__hyp_bp_vect_base = bp_base;
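For comparison with patch 1/8 (again illustrative only, not the kernel code):
the pKVM allocator grows the private range upward from __io_map_base and fails
once the end would cross the vmemmap boundary. A sketch reusing the PAGE_SIZE,
PAGE_ALIGN and get_order helpers from the example after patch 1/8;
alloc_upward is a hypothetical name.

#define ALIGN(x, a)	(((x) + ((unsigned long)(a) - 1)) & ~((unsigned long)(a) - 1))

/*
 * Upward-growing variant modeled on pkvm_alloc_private_va_range():
 * returns 0 (standing in for ERR_PTR(-ENOMEM)) on overflow.
 */
static unsigned long alloc_upward(unsigned long *io_map_base,
				  unsigned long hyp_vmemmap, unsigned long size)
{
	/* Align the start based on the order of the requested size */
	unsigned long addr = ALIGN(*io_map_base, PAGE_SIZE << get_order(size));
	/* End of the allocation; always a multiple of PAGE_SIZE */
	unsigned long base = addr + PAGE_ALIGN(size);

	if (!addr || base > hyp_vmemmap)
		return 0;

	*io_map_base = base;	/* commit the new watermark */
	return addr;
}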
From patchwork Mon Mar  7 18:49:01 2022
X-Patchwork-Id: 12772236
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:49:01 -0800
Subject: [PATCH v5 3/8] KVM: arm64: Add guard pages for KVM nVHE hypervisor stack
Message-Id: <20220307184935.1704614-4-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

Map the stack pages in the flexible private VA range and allocate
guard pages below the stack as unbacked VA space. The stack is aligned
so that any valid stack address has the PAGE_SHIFT bit as 1 - this is
used for overflow detection (implemented in a subsequent patch in the
series).

Signed-off-by: Kalesh Singh
---
Changes in v5:
  - Use a single allocation for stack and guard pages to ensure they
    are contiguous, per Marc

Changes in v4:
  - Replace IS_ERR_OR_NULL check with IS_ERR check now that
    hyp_alloc_private_va_range() returns an error for null pointer, per Fuad
  - Format comments to < 80 cols, per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/include/asm/kvm_mmu.h |  3 +++
 arch/arm64/kvm/arm.c             | 40 +++++++++++++++++++++++++++++---
 arch/arm64/kvm/mmu.c             |  4 ++--
 4 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index d5b0386ef765..2e277f2ed671 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -169,6 +169,7 @@ struct kvm_nvhe_init_params {
 	unsigned long tcr_el2;
 	unsigned long tpidr_el2;
 	unsigned long stack_hyp_va;
+	unsigned long stack_pa;
 	phys_addr_t pgd_pa;
 	unsigned long hcr_el2;
 	unsigned long vttbr;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 514cfee76597..fe40cebe2be2 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -116,6 +116,9 @@ alternative_cb_end
 #include
 #include
 
+extern struct kvm_pgtable *hyp_pgtable;
+extern struct mutex kvm_hyp_pgd_mutex;
+
 void kvm_update_va_mask(struct alt_instr *alt,
 			__le32 *origptr, __le32 *updptr, int nr_inst);
 void kvm_compute_layout(void);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..cc712e421c5a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1541,7 +1541,6 @@ static void cpu_prepare_hyp_mode(int cpu)
 	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
 	params->tcr_el2 = tcr;
 
-	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
 	params->pgd_pa = kvm_mmu_get_httbr();
 	if (is_protected_kvm_enabled())
 		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
@@ -1990,14 +1989,49 @@ static int init_hyp_mode(void)
 	 * Map the Hyp stack pages
 	 */
 	for_each_possible_cpu(cpu) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
-		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
-					  PAGE_HYP);
+		unsigned long hyp_addr;
 
+		/*
+		 * Allocate a contiguous HYP private VA range for the stack
+		 * and guard page. The allocation is also aligned based on
+		 * the order of its size.
+		 */
+		hyp_addr = hyp_alloc_private_va_range(PAGE_SIZE * 2);
+		if (IS_ERR((void *)hyp_addr)) {
+			err = PTR_ERR((void *)hyp_addr);
+			kvm_err("Cannot allocate hyp stack guard page\n");
+			goto out_err;
+		}
+
+		/*
+		 * Since the stack grows downwards, map the stack to the page
+		 * at the higher address and leave the lower guard page
+		 * unbacked.
+		 *
+		 * Any valid stack address now has the PAGE_SHIFT bit as 1
+		 * and addresses corresponding to the guard page have the
+		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
+		 */
+		mutex_lock(&kvm_hyp_pgd_mutex);
+		err = kvm_pgtable_hyp_map(hyp_pgtable, hyp_addr + PAGE_SIZE,
+					  PAGE_SIZE, __pa(stack_page), PAGE_HYP);
+		mutex_unlock(&kvm_hyp_pgd_mutex);
 		if (err) {
 			kvm_err("Cannot map hyp stack\n");
 			goto out_err;
 		}
+
+		/*
+		 * Save the stack PA in nvhe_init_params. This will be needed
+		 * to recreate the stack mapping in protected nVHE mode.
+		 * __hyp_pa() won't do the right thing there, since the stack
+		 * has been mapped in the flexible private VA space.
+		 */
+		params->stack_pa = __pa(stack_page);
+
+		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}
 
 	for_each_possible_cpu(cpu) {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ccb2847ee2f4..dfdd8c21ed74 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -22,8 +22,8 @@
 
 #include "trace.h"
 
-static struct kvm_pgtable *hyp_pgtable;
-static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
+struct kvm_pgtable *hyp_pgtable;
+DEFINE_MUTEX(kvm_hyp_pgd_mutex);
 
 static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
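Why the single aligned allocation yields the overflow-detection property
claimed above: the two-page range is aligned to 2 * PAGE_SIZE, so the
PAGE_SHIFT bit is 0 everywhere in the low (guard) page and 1 everywhere in
the high (stack) page. A stand-alone check, with an assumed example base
address:

#include <assert.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	/*
	 * As returned by hyp_alloc_private_va_range(PAGE_SIZE * 2):
	 * aligned to 2 * PAGE_SIZE. Example value only.
	 */
	unsigned long hyp_addr = 0x7fffc000UL;
	unsigned long guard = hyp_addr;			/* left unbacked */
	unsigned long stack = hyp_addr + PAGE_SIZE;	/* mapped page   */
	unsigned long off;

	for (off = 0; off < PAGE_SIZE; off += 16) {
		/* Stack-page addresses have bit PAGE_SHIFT set ... */
		assert(((stack + off) >> PAGE_SHIFT) & 1);
		/* ... guard-page addresses have it clear. */
		assert(!(((guard + off) >> PAGE_SHIFT) & 1));
	}
	return 0;
}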
From patchwork Mon Mar  7 18:49:02 2022
X-Patchwork-Id: 12772237
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:49:02 -0800
Subject: [PATCH v5 4/8] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack
Message-Id: <20220307184935.1704614-5-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

Map the stack pages in the flexible private VA range and allocate
guard pages below the stack as unbacked VA space. The stack is aligned
so that any valid stack address has the PAGE_SHIFT bit as 1 - this is
used for overflow detection (implemented in a subsequent patch in the
series).

Signed-off-by: Kalesh Singh
---
Changes in v5:
  - Use a single allocation for stack and guard pages to ensure they
    are contiguous, per Marc

Changes in v4:
  - Replace IS_ERR_OR_NULL check with IS_ERR check now that
    pkvm_alloc_private_va_range() returns an error for null pointer, per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..4bec3069b234 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		return ret;
 
 	for (i = 0; i < hyp_nr_cpus; i++) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
+		unsigned long hyp_addr;
+
 		start = (void *)kern_hyp_va(per_cpu_base[i]);
 		end = start + PAGE_ALIGN(hyp_percpu_size);
 		ret = pkvm_create_mappings(start, end, PAGE_HYP);
 		if (ret)
 			return ret;
 
-		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
-		start = end - PAGE_SIZE;
-		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		/*
+		 * Allocate a contiguous HYP private VA range for the stack
+		 * and guard page. The allocation is also aligned based on
+		 * the order of its size.
+		 */
+		hyp_addr = pkvm_alloc_private_va_range(PAGE_SIZE * 2);
+		if (IS_ERR((void *)hyp_addr))
+			return PTR_ERR((void *)hyp_addr);
+
+		/*
+		 * Since the stack grows downwards, map the stack to the page
+		 * at the higher address and leave the lower guard page
+		 * unbacked.
+		 *
+		 * Any valid stack address now has the PAGE_SHIFT bit as 1
+		 * and addresses corresponding to the guard page have the
+		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
+		 */
+		hyp_spin_lock(&pkvm_pgd_lock);
+		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
+					  PAGE_SIZE, params->stack_pa, PAGE_HYP);
+		hyp_spin_unlock(&pkvm_pgd_lock);
 		if (ret)
 			return ret;
+
+		/* Update stack_hyp_va to end of the stack's private VA range */
+		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}
 
 	/*
From patchwork Mon Mar  7 18:49:03 2022
X-Patchwork-Id: 12772238
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:49:03 -0800
Subject: [PATCH v5 5/8] KVM: arm64: Detect and handle hypervisor stack overflows
Message-Id: <20220307184935.1704614-6-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

The hypervisor stacks (for both nVHE Hyp mode and nVHE protected mode)
are aligned such that any valid stack address has PAGE_SHIFT bit as 1.
This allows us to conveniently check for overflow in the exception
entry without corrupting any GPRs. We won't recover from a stack
overflow so panic the hypervisor.

Signed-off-by: Kalesh Singh
---
Changes in v5:
  - Valid stack addresses now have PAGE_SHIFT bit as 1 instead of 0

Changes in v3:
  - Remove test_sp_overflow macro, per Mark
  - Add asmlinkage attribute for hyp_panic, hyp_panic_bad_stack, per Ard

 arch/arm64/kvm/hyp/nvhe/host.S   | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c |  7 ++++++-
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 3d613e721a75..be6d844279b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -153,6 +153,18 @@ SYM_FUNC_END(__host_hvc)
 .macro invalid_host_el2_vect
 	.align 7
+
+	/*
+	 * Test whether the SP has overflowed, without corrupting a GPR.
+	 * nVHE hypervisor stacks are aligned so that the PAGE_SHIFT bit
+	 * of SP should always be 1.
+	 */
+	add	sp, sp, x0			// sp' = sp + x0
+	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
+	tbz	x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
+	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
+	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
+
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
 	get_loaded_vcpu x0, x1
@@ -165,6 +177,18 @@ SYM_FUNC_END(__host_hvc)
 	 * been partially clobbered by __host_enter.
 	 */
 	b	hyp_panic
+
+.L__hyp_sp_overflow\@:
+	/*
+	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
+	 * This corrupts the stack but is ok, since we won't be attempting
+	 * any unwinding here.
+	 */
+	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
+	mov	sp, x0
+
+	bl	hyp_panic_bad_stack
+	ASM_BUG()
 .endm
 
 .macro invalid_host_el1_vect
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6410d21d8695..703a5d3f611b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -347,7 +347,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
-void __noreturn hyp_panic(void)
+asmlinkage void __noreturn hyp_panic(void)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
@@ -369,6 +369,11 @@ void __noreturn hyp_panic(void)
 	unreachable();
 }
 
+asmlinkage void __noreturn hyp_panic_bad_stack(void)
+{
+	hyp_panic();
+}
+
 asmlinkage void kvm_unexpected_el2_exception(void)
 {
 	return __kvm_unexpected_el2_exception();
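The entry sequence above recovers both SP and x0 without needing a scratch
register. Modeled in C purely as a sketch of the register arithmetic (the
real code branches away at the tbz instead of falling through, and
sp_overflowed is a hypothetical name):

#include <assert.h>

#define PAGE_SHIFT	12

/* Models the five instructions in invalid_host_el2_vect. */
static int sp_overflowed(unsigned long *sp, unsigned long *x0)
{
	int overflow;

	*sp += *x0;		/* add sp, sp, x0 : sp' = sp + x0     */
	*x0 = *sp - *x0;	/* sub x0, sp, x0 : x0' = original sp */

	/* tbz x0, #PAGE_SHIFT, ... : bit clear means guard-page hit */
	overflow = !((*x0 >> PAGE_SHIFT) & 1);

	*x0 = *sp - *x0;	/* sub x0, sp, x0 : original x0 back  */
	*sp -= *x0;		/* sub sp, sp, x0 : original sp back  */
	return overflow;
}

int main(void)
{
	unsigned long sp = 0x7fffdf80UL, x0 = 0xdeadbeefUL;

	assert(!sp_overflowed(&sp, &x0));	/* valid stack address   */
	assert(sp == 0x7fffdf80UL && x0 == 0xdeadbeefUL);

	sp = 0x7fffcf80UL;			/* inside the guard page */
	assert(sp_overflowed(&sp, &x0));
	return 0;
}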
From patchwork Mon Mar  7 18:49:04 2022
X-Patchwork-Id: 12772243
From: Kalesh Singh
Date: Mon,  7 Mar 2022 10:49:04 -0800
Subject: [PATCH v5 6/8] KVM: arm64: Add hypervisor overflow stack
Message-Id: <20220307184935.1704614-7-kaleshsingh@google.com>
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>

Allocate and switch to a 16-byte aligned secondary stack on overflow.
This provides us stack space to better handle overflows and is used in
a subsequent patch to dump the hypervisor stacktrace. The overflow
stack is only allocated if CONFIG_NVHE_EL2_DEBUG is enabled, as the
hypervisor stacktrace is a debug feature dependent on
CONFIG_NVHE_EL2_DEBUG.

Signed-off-by: Kalesh Singh
---
Changes in v4:
  - Update comment to clarify resetting the SP to the top of the stack
    only happens if CONFIG_NVHE_EL2_DEBUG is disabled, per Fuad

 arch/arm64/kvm/hyp/nvhe/host.S   | 11 ++++++++---
 arch/arm64/kvm/hyp/nvhe/switch.c |  5 +++++
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index be6d844279b1..a0c4b4f1549f 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -179,13 +179,18 @@ SYM_FUNC_END(__host_hvc)
 	b	hyp_panic
 
 .L__hyp_sp_overflow\@:
+#ifdef CONFIG_NVHE_EL2_DEBUG
+	/* Switch to the overflow stack */
+	adr_this_cpu sp, hyp_overflow_stack + PAGE_SIZE, x0
+#else
 	/*
-	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
-	 * This corrupts the stack but is ok, since we won't be attempting
-	 * any unwinding here.
+	 * If !CONFIG_NVHE_EL2_DEBUG, reset SP to the top of the stack, to
+	 * allow handling the hyp_panic. This corrupts the stack but is ok,
+	 * since we won't be attempting any unwinding here.
 	 */
 	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
 	mov	sp, x0
+#endif
 
 	bl	hyp_panic_bad_stack
 	ASM_BUG()
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 703a5d3f611b..efc20273a352 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -34,6 +34,11 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+#ifdef CONFIG_NVHE_EL2_DEBUG
+DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack)
+	__aligned(16);
+#endif
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
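The shape of the overflow stack, sketched freestanding: a page-sized,
16-byte aligned per-CPU array whose high end becomes the new SP, mirroring
the adr_this_cpu line above. NR_CPUS and overflow_stack_top() here are
illustrative names, not the kernel's symbols.

#define NR_CPUS			8
#define PAGE_SIZE		4096UL
#define OVERFLOW_STACK_SIZE	PAGE_SIZE

/* One emergency stack per CPU; AAPCS64 requires 16-byte SP alignment. */
static unsigned long overflow_stack[NR_CPUS][OVERFLOW_STACK_SIZE / sizeof(long)]
	__attribute__((aligned(16)));

/*
 * Stacks grow down, so the entry code points SP at the end of the array,
 * as "adr_this_cpu sp, hyp_overflow_stack + PAGE_SIZE, x0" does above.
 */
static inline unsigned long overflow_stack_top(int cpu)
{
	return (unsigned long)overflow_stack[cpu] + OVERFLOW_STACK_SIZE;
}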
linux-arm-kernel@lists.infradead.org; Mon, 07 Mar 2022 18:53:49 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id q76-20020a25d94f000000b00628bdf8d1a9so12322785ybg.17 for ; Mon, 07 Mar 2022 10:53:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:cc; bh=gp49bwxlp/PxlJHwkVMdcAzBdelX+CGPSzAgbUQS+yI=; b=MkNnkKNY3SMuIzh9I1+CaqPXibe4Y6SFu/RHev5wMtDxmx46hQwiewmvu8xe0hyHkj 3Bwe/aZ2+z3wTzDsqRdcIN16SE7ijKHRLsfVr+D6wf8c6X2SJEeO8k8LHZmiP0KKMLPm ysAK6Dn+wHkoQ1cs8sTlOwa1RDFg5btXjup/IY85L+jqZMAcTK808psi8NP8lZfEJKI8 Kxa+hpV/S63fVJ6MGBvSTj0LtJCqKyfkgPw3GSuoDJG7Hl/ygJ5w8B0G1VL+srlrGLIl V3iLGV7B9buCj117dAfbFliPrlvFsLE/rar5VISR3m2QTVh/z+a9vccpJdcv5VTbOav3 tEKw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:cc; bh=gp49bwxlp/PxlJHwkVMdcAzBdelX+CGPSzAgbUQS+yI=; b=xGrYxLkvtgG9mHxFF+XsyXVFEka7RWEcSCH2nWtzIO7xK3MIGYjG48JwjIzf9AiYib s5EJsLlB8E0FmpmlGoo4NMAOUv5FXQoMD3MX47TBDbkx2BxzqZIt1tNrtiUIIarv5Zvu xGTOeL/nahmDe96C0oXXnkY37h/v3EmhEqhRX2GZqHXJpzC0ja991T6BZUl7IrWrtdBJ 9dxwPF2bPENyrv4vJXJNfOvWooatZV2UArgJRQ081sZgnpuzBuk/Bbf9QqYgREsGaNzf Ps+BVm1jL2RRMrEEbSHZ8B+SLBhBqux1KzPdeKABsAm6dJ/093FVNzhfz8MRmJLQKK+r I1PA== X-Gm-Message-State: AOAM531B9z3Icn4cRmCGR5X7fJmIDRaYgTou6XMmeJhCdSxaAUiGex2W eOKcs3YX8riXD6EMLJFeOnDU3p+ofS0GBisu/g== X-Google-Smtp-Source: ABdhPJzbU5qgK/xJ2/og1vwuImYwxkGHt49Sk4nprmPRXYFGNvbr/4dATjcU/VbMtNefMKAhz5O1nAC7mORp7a+Waw== X-Received: from kaleshsingh.mtv.corp.google.com ([2620:15c:211:200:dd66:1e7d:1858:4587]) (user=kaleshsingh job=sendgmr) by 2002:a25:8684:0:b0:629:917:c5c with SMTP id z4-20020a258684000000b0062909170c5cmr8734183ybk.403.1646679225768; Mon, 07 Mar 2022 10:53:45 -0800 (PST) Date: Mon, 7 Mar 2022 10:49:05 -0800 In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com> Message-Id: <20220307184935.1704614-8-kaleshsingh@google.com> Mime-Version: 1.0 References: <20220307184935.1704614-1-kaleshsingh@google.com> X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog Subject: [PATCH v5 7/8] KVM: arm64: Unwind and dump nVHE HYP stacktrace From: Kalesh Singh Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com, kernel-team@android.com, Kalesh Singh , James Morse , Alexandru Elisei , Suzuki K Poulose , Catalin Marinas , Mark Rutland , Mark Brown , Masami Hiramatsu , Peter Collingbourne , "Madhavan T. Venkataraman" , Andrew Walbran , Andrew Scull , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220307_105347_243321_DAD7DCDD X-CRM114-Status: GOOD ( 30.18 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Unwind the stack in EL1, when CONFIG_NVHE_EL2_DEBUG is enabled. This is possible because CONFIG_NVHE_EL2_DEBUG disables the host stage-2 protection on hyp_panic(), allowing the host to access the hypervisor stack pages in EL1. A simple stack overflow test produces the following output: [ 580.376051][ T412] kvm: nVHE hyp panic at: ffffffc0116145c4! 
[ 580.378034][ T412] kvm [412]: nVHE HYP call trace: [ 580.378591][ T412] kvm [412]: [] [ 580.378993][ T412] kvm [412]: [] [ 580.379386][ T412] kvm [412]: [] // Non-terminating recursive call [ 580.379772][ T412] kvm [412]: [] [ 580.380158][ T412] kvm [412]: [] [ 580.380544][ T412] kvm [412]: [] [ 580.380928][ T412] kvm [412]: [] . . . Since nVHE hyp symbols are not included by kallsyms to avoid issues with aliasing, we fallback to the vmlinux addresses. Symbolizing the addresses is handled in the next patch in this series. Signed-off-by: Kalesh Singh --- Changes in v4: - Update commit text and struct kvm_nvhe_panic_info kernel-doc comment to clarify that CONFIG_NVHE_EL2_DEBUG only disables the host stage-2 protection on hyp_panic(), per Fuad - Update NVHE_EL2_DEBUG Kconfig description to clarify that the hypervisor stack trace is printed when hyp_panic() is called, per Fuad Changes in v3: - The nvhe hyp stack unwinder now makes use of the core logic from the regular kernel unwinder to avoid duplication, per Mark Changes in v2: - Add cpu_prepare_nvhe_panic_info() - Move updating the panic info to hyp_panic(), so that unwinding also works for conventional nVHE Hyp-mode. arch/arm64/include/asm/kvm_asm.h | 20 +++ arch/arm64/include/asm/stacktrace.h | 12 ++ arch/arm64/kernel/stacktrace.c | 210 +++++++++++++++++++++++++--- arch/arm64/kvm/Kconfig | 5 +- arch/arm64/kvm/arm.c | 2 +- arch/arm64/kvm/handle_exit.c | 3 + arch/arm64/kvm/hyp/nvhe/switch.c | 18 +++ 7 files changed, 244 insertions(+), 26 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 2e277f2ed671..4abcf93c6662 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -176,6 +176,26 @@ struct kvm_nvhe_init_params { unsigned long vtcr; }; +#ifdef CONFIG_NVHE_EL2_DEBUG +/** + * struct kvm_nvhe_panic_info - nVHE hypervisor panic info. + * @hyp_stack_base: hyp VA of the hyp_stack base. + * @hyp_overflow_stack_base: hyp VA of the hyp_overflow_stack base. + * @fp: hyp FP where the backtrace begins. + * @pc: hyp PC where the backtrace begins. + * + * Used by the host in EL1 to dump the nVHE hypervisor backtrace on + * hyp_panic. This is possible because CONFIG_NVHE_EL2_DEBUG disables + * the host stage 2 protection on hyp_panic(). 
See: __hyp_do_panic() + */ +struct kvm_nvhe_panic_info { + unsigned long hyp_stack_base; + unsigned long hyp_overflow_stack_base; + unsigned long fp; + unsigned long pc; +}; +#endif /* CONFIG_NVHE_EL2_DEBUG */ + /* Translate a kernel address @ptr into its equivalent linear mapping */ #define kvm_ksym_ref(ptr) \ ({ \ diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h index e77cdef9ca29..18611a51cf14 100644 --- a/arch/arm64/include/asm/stacktrace.h +++ b/arch/arm64/include/asm/stacktrace.h @@ -22,6 +22,10 @@ enum stack_type { STACK_TYPE_OVERFLOW, STACK_TYPE_SDEI_NORMAL, STACK_TYPE_SDEI_CRITICAL, +#ifdef CONFIG_NVHE_EL2_DEBUG + STACK_TYPE_KVM_NVHE_HYP, + STACK_TYPE_KVM_NVHE_OVERFLOW, +#endif /* CONFIG_NVHE_EL2_DEBUG */ __NR_STACK_TYPES }; @@ -147,4 +151,12 @@ static inline bool on_accessible_stack(const struct task_struct *tsk, return false; } +#ifdef CONFIG_NVHE_EL2_DEBUG +void kvm_nvhe_dump_backtrace(unsigned long hyp_offset); +#else +static inline void kvm_nvhe_dump_backtrace(unsigned long hyp_offset) +{ +} +#endif /* CONFIG_NVHE_EL2_DEBUG */ + #endif /* __ASM_STACKTRACE_H */ diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index e4103e085681..6ec85cb69b1f 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -15,6 +15,8 @@ #include #include +#include +#include #include #include @@ -64,26 +66,15 @@ NOKPROBE_SYMBOL(start_backtrace); * records (e.g. a cycle), determined based on the location and fp value of A * and the location (but not the fp value) of B. */ -static int notrace unwind_frame(struct task_struct *tsk, - struct stackframe *frame) +static int notrace __unwind_frame(struct stackframe *frame, struct stack_info *info, + unsigned long (*translate_fp)(unsigned long, enum stack_type)) { unsigned long fp = frame->fp; - struct stack_info info; - - if (!tsk) - tsk = current; - - /* Final frame; nothing to unwind */ - if (fp == (unsigned long)task_pt_regs(tsk)->stackframe) - return -ENOENT; if (fp & 0x7) return -EINVAL; - if (!on_accessible_stack(tsk, fp, 16, &info)) - return -EINVAL; - - if (test_bit(info.type, frame->stacks_done)) + if (test_bit(info->type, frame->stacks_done)) return -EINVAL; /* @@ -94,28 +85,62 @@ static int notrace unwind_frame(struct task_struct *tsk, * * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW + * KVM_NVHE_HYP -> KVM_NVHE_OVERFLOW * * ... but the nesting itself is strict. Once we transition from one * stack to another, it's never valid to unwind back to that first * stack. */ - if (info.type == frame->prev_type) { + if (info->type == frame->prev_type) { if (fp <= frame->prev_fp) return -EINVAL; } else { set_bit(frame->prev_type, frame->stacks_done); } + /* Record fp as prev_fp before attempting to get the next fp */ + frame->prev_fp = fp; + + /* + * If fp is not from the current address space perform the + * necessary translation before dereferencing it to get next fp. + */ + if (translate_fp) + fp = translate_fp(fp, info->type); + if (!fp) + return -EINVAL; + /* * Record this frame record's values and location. The prev_fp and - * prev_type are only meaningful to the next unwind_frame() invocation. + * prev_type are only meaningful to the next __unwind_frame() invocation. 
	 */
 	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
-	frame->prev_fp = fp;
-	frame->prev_type = info.type;
-
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+	frame->prev_type = info->type;
+
+	return 0;
+}
+
+static int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+{
+	unsigned long fp = frame->fp;
+	struct stack_info info;
+	int err;
+
+	if (!tsk)
+		tsk = current;
+
+	/* Final frame; nothing to unwind */
+	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+		return -ENOENT;
+
+	if (!on_accessible_stack(tsk, fp, 16, &info))
+		return -EINVAL;
+
+	err = __unwind_frame(frame, &info, NULL);
+	if (err)
+		return err;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&
@@ -143,20 +168,27 @@ static int notrace unwind_frame(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(unwind_frame);
 
-static void notrace walk_stackframe(struct task_struct *tsk,
-				    struct stackframe *frame,
-				    bool (*fn)(void *, unsigned long), void *data)
+static void notrace __walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
+				      bool (*fn)(void *, unsigned long), void *data,
+				      int (*unwind_frame_fn)(struct task_struct *tsk, struct stackframe *frame))
 {
 	while (1) {
 		int ret;
 
 		if (!fn(data, frame->pc))
 			break;
-		ret = unwind_frame(tsk, frame);
+		ret = unwind_frame_fn(tsk, frame);
 		if (ret < 0)
 			break;
 	}
 }
+
+static void notrace walk_stackframe(struct task_struct *tsk,
+				    struct stackframe *frame,
+				    bool (*fn)(void *, unsigned long), void *data)
+{
+	__walk_stackframe(tsk, frame, fn, data, unwind_frame);
+}
 NOKPROBE_SYMBOL(walk_stackframe);
 
 static bool dump_backtrace_entry(void *arg, unsigned long where)
@@ -210,3 +242,135 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	walk_stackframe(task, &frame, consume_entry, cookie);
 }
+
+#ifdef CONFIG_NVHE_EL2_DEBUG
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack);
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_panic_info, kvm_panic_info);
+
+static inline bool kvm_nvhe_on_overflow_stack(unsigned long sp, unsigned long size,
+					      struct stack_info *info)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long low = (unsigned long)panic_info->hyp_overflow_stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_KVM_NVHE_OVERFLOW, info);
+}
+
+static inline bool kvm_nvhe_on_hyp_stack(unsigned long sp, unsigned long size,
+					 struct stack_info *info)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long low = (unsigned long)panic_info->hyp_stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_KVM_NVHE_HYP, info);
+}
+
+static inline bool kvm_nvhe_on_accessible_stack(unsigned long sp, unsigned long size,
+						struct stack_info *info)
+{
+	if (info)
+		info->type = STACK_TYPE_UNKNOWN;
+
+	if (kvm_nvhe_on_hyp_stack(sp, size, info))
+		return true;
+	if (kvm_nvhe_on_overflow_stack(sp, size, info))
+		return true;
+
+	return false;
+}
+
+static unsigned long kvm_nvhe_hyp_stack_kern_va(unsigned long addr)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	hyp_base = (unsigned long)panic_info->hyp_stack_base;
+	hyp_offset = addr - hyp_base;
+
+	kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+
+	return kern_base + hyp_offset;
+}
+
+static unsigned long kvm_nvhe_overflow_stack_kern_va(unsigned long addr)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	hyp_base = (unsigned long)panic_info->hyp_overflow_stack_base;
+	hyp_offset = addr - hyp_base;
+
+	kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(hyp_overflow_stack);
+
+	return kern_base + hyp_offset;
+}
+
+/*
+ * Convert KVM nVHE hypervisor stack VA to a kernel VA.
+ *
+ * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to allow
+ * for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base. See: kvm_nvhe_hyp_stack_kern_va(), kvm_nvhe_overflow_stack_kern_va()
+ */
+static unsigned long kvm_nvhe_stack_kern_va(unsigned long addr,
+					    enum stack_type type)
+{
+	switch (type) {
+	case STACK_TYPE_KVM_NVHE_HYP:
+		return kvm_nvhe_hyp_stack_kern_va(addr);
+	case STACK_TYPE_KVM_NVHE_OVERFLOW:
+		return kvm_nvhe_overflow_stack_kern_va(addr);
+	default:
+		return 0UL;
+	}
+}
+
+static int notrace kvm_nvhe_unwind_frame(struct task_struct *tsk,
+					 struct stackframe *frame)
+{
+	struct stack_info info;
+
+	if (!kvm_nvhe_on_accessible_stack(frame->fp, 16, &info))
+		return -EINVAL;
+
+	return __unwind_frame(frame, &info, kvm_nvhe_stack_kern_va);
+}
+
+static bool kvm_nvhe_dump_backtrace_entry(void *arg, unsigned long where)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+	unsigned long hyp_offset = (unsigned long)arg;
+
+	where &= va_mask;	/* Mask tags */
+	where += hyp_offset;	/* Convert to kern addr */
+
+	kvm_err("[<%016lx>] %pB\n", where, (void *)where);
+
+	return true;
+}
+
+static void notrace kvm_nvhe_walk_stackframe(struct task_struct *tsk,
+					     struct stackframe *frame,
+					     bool (*fn)(void *, unsigned long), void *data)
+{
+	__walk_stackframe(tsk, frame, fn, data, kvm_nvhe_unwind_frame);
+}
+
+void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	struct stackframe frame;
+
+	start_backtrace(&frame, panic_info->fp, panic_info->pc);
+	pr_err("nVHE HYP call trace:\n");
+	kvm_nvhe_walk_stackframe(NULL, &frame, kvm_nvhe_dump_backtrace_entry,
+				 (void *)hyp_offset);
+	pr_err("---- end of nVHE HYP call trace ----\n");
+}
+#endif /* CONFIG_NVHE_EL2_DEBUG */
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..a7be4ef35fbf 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -51,8 +51,9 @@ config NVHE_EL2_DEBUG
 	depends on KVM
 	help
 	  Say Y here to enable the debug mode for the non-VHE KVM EL2 object.
-	  Failure reports will BUG() in the hypervisor. This is intended for
-	  local EL2 hypervisor development.
+	  Failure reports will BUG() in the hypervisor; and calls to hyp_panic()
+	  will result in printing the hypervisor call stack.
+	  This is intended for local EL2 hypervisor development.
 
 	  If unsure, say N.
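[Editor's note] To make the address fixup in kvm_nvhe_dump_backtrace_entry() above concrete, here is a standalone userspace sketch of the same arithmetic. This is not kernel code: GENMASK_ULL is re-derived locally, and the 48-bit VA width plus the sample values are assumptions for illustration only.

/*
 * Sketch of the hyp-PC fixup: mask the tag bits off the hyp VA with
 * GENMASK_ULL(vabits_actual - 1, 0), then add hyp_offset so the result
 * lands in the kernel image, where kallsyms/%pB can symbolize it.
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

static uint64_t hyp_to_kimg_addr(uint64_t where, uint64_t hyp_offset,
				 unsigned int va_bits)
{
	uint64_t va_mask = GENMASK_ULL(va_bits - 1, 0);

	where &= va_mask;		/* mask tags, as in the patch */
	return where + hyp_offset;	/* convert to a kernel address */
}

int main(void)
{
	uint64_t hyp_pc = 0xffff80000880f4a4ULL;	/* sample hyp VA */
	uint64_t hyp_offset = 0x0000a8d632d0f000ULL;	/* sample offset */

	printf("[<%016llx>]\n",
	       (unsigned long long)hyp_to_kimg_addr(hyp_pc, hyp_offset, 48));
	return 0;
}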
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index cc712e421c5a..3d9efcf4fbb5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,7 +49,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index e3140abd2e2e..ff69dff33700 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -326,6 +327,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 		kvm_err("nVHE hyp panic at: %016llx!\n", elr_virt + hyp_offset);
 	}
 
+	kvm_nvhe_dump_backtrace(hyp_offset);
+
 	/*
 	 * Hyp has panicked and we're going to handle that by panicking the
 	 * kernel. The kernel offset will be revealed in the panic so we're
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index efc20273a352..b8ecffc47424 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -37,6 +37,22 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 #ifdef CONFIG_NVHE_EL2_DEBUG
 DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack) __aligned(16);
+DEFINE_PER_CPU(struct kvm_nvhe_panic_info, kvm_panic_info);
+
+static inline void cpu_prepare_nvhe_panic_info(void)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr(&kvm_panic_info);
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	panic_info->hyp_stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+	panic_info->hyp_overflow_stack_base = (unsigned long)this_cpu_ptr(hyp_overflow_stack);
+	panic_info->fp = (unsigned long)__builtin_frame_address(0);
+	panic_info->pc = _THIS_IP_;
+}
+#else
+static inline void cpu_prepare_nvhe_panic_info(void)
+{
+}
 #endif
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
@@ -360,6 +376,8 @@ asmlinkage void __noreturn hyp_panic(void)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 
+	cpu_prepare_nvhe_panic_info();
+
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
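[Editor's note] The switch.c hunk above is the producer side of the handoff: hyp_panic() snapshots its fp/pc into the per-CPU kvm_nvhe_panic_info before the host regains control, and the host-side unwinder walks the frame records from that seed. The toy userspace model below sketches that producer/consumer split; the struct layout, helper names, and addresses are illustrative assumptions, not the patch's code.

/*
 * Toy model: a "panicking" side records fp/pc, and a separate walker
 * follows AArch64-style frame records ({caller fp, saved lr} pairs)
 * until it reaches a frame whose fp is 0.
 */
#include <stdint.h>
#include <stdio.h>

struct frame_record {
	uintptr_t fp;	/* points to the caller's frame record */
	uintptr_t lr;	/* return address saved in this frame */
};

struct panic_info {
	uintptr_t fp;	/* frame pointer captured at "panic" time */
	uintptr_t pc;	/* PC captured at "panic" time */
};

static void dump_backtrace(const struct panic_info *info)
{
	const struct frame_record *rec = (const struct frame_record *)info->fp;
	uintptr_t pc = info->pc;

	while (rec) {
		printf("[<%016lx>]\n", (unsigned long)pc);
		pc = rec->lr;	/* caller's return address */
		rec = (const struct frame_record *)rec->fp;
	}
	printf("---- end of call trace ----\n");
}

int main(void)
{
	/* Two chained fake frames; the outermost has fp == 0 to stop. */
	struct frame_record outer = { .fp = 0, .lr = 0x1111 };
	struct frame_record inner = { .fp = (uintptr_t)&outer, .lr = 0x2222 };
	struct panic_info info = { .fp = (uintptr_t)&inner, .pc = 0x3333 };

	dump_backtrace(&info);
	return 0;
}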
From patchwork Mon Mar 7 18:49:06 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12772245
Date: Mon, 7 Mar 2022 10:49:06 -0800
In-Reply-To: <20220307184935.1704614-1-kaleshsingh@google.com>
Message-Id: <20220307184935.1704614-9-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220307184935.1704614-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.35.1.616.g0bdcbb4464-goog
Subject: [PATCH v5 8/8] KVM: arm64: Symbolize the nVHE HYP backtrace
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
    surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Mark Rutland,
    Mark Brown, Masami Hiramatsu, Peter Collingbourne,
Venkataraman" , Andrew Walbran , Andrew Scull , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220307_105439_593018_F08EEAFD X-CRM114-Status: GOOD ( 15.09 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Reintroduce the __kvm_nvhe_ symbols in kallsyms, ignoring the local symbols in this namespace. The local symbols are not informative and can cause aliasing issues when symbolizing the addresses. With the necessary symbols now in kallsyms we can symbolize nVHE stacktrace addresses using the %pB print format specifier. Example call trace: [ 98.916444][ T426] kvm [426]: nVHE hyp panic at: [] __kvm_nvhe_overflow_stack+0x8/0x34! [ 98.918360][ T426] nVHE HYP call trace: [ 98.918692][ T426] kvm [426]: [] __kvm_nvhe_cpu_prepare_nvhe_panic_info+0x4c/0x68 [ 98.919545][ T426] kvm [426]: [] __kvm_nvhe_hyp_panic+0x2c/0xe8 [ 98.920107][ T426] kvm [426]: [] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10 [ 98.920665][ T426] kvm [426]: [] __kvm_nvhe___kvm_hyp_host_vector+0x24c/0x794 [ 98.921292][ T426] kvm [426]: [] __kvm_nvhe_overflow_stack+0x24/0x34 . . . [ 98.973382][ T426] kvm [426]: [] __kvm_nvhe_overflow_stack+0x24/0x34 [ 98.973816][ T426] kvm [426]: [] __kvm_nvhe___kvm_vcpu_run+0x38/0x438 [ 98.974255][ T426] kvm [426]: [] __kvm_nvhe_handle___kvm_vcpu_run+0x1c4/0x364 [ 98.974719][ T426] kvm [426]: [] __kvm_nvhe_handle_trap+0xa8/0x130 [ 98.975152][ T426] kvm [426]: [] __kvm_nvhe___host_exit+0x64/0x64 [ 98.975588][ T426] ---- end of nVHE HYP call trace ---- Signed-off-by: Kalesh Singh --- Changes in v2: - Fix printk warnings - %p expects (void *) arch/arm64/kvm/handle_exit.c | 13 +++++-------- scripts/kallsyms.c | 2 +- 2 files changed, 6 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index ff69dff33700..3a5c32017c6b 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -296,13 +296,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_in_kimg = __phys_to_kimg(elr_phys); u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr_virt; u64 mode = spsr & PSR_MODE_MASK; + u64 panic_addr = elr_virt + hyp_offset; - /* - * The nVHE hyp symbols are not included by kallsyms to avoid issues - * with aliasing. That means that the symbols cannot be printed with the - * "%pS" format specifier, so fall back to the vmlinux address if - * there's no better option. 
-	 */
 	if (mode != PSR_MODE_EL2t && mode != PSR_MODE_EL2h) {
 		kvm_err("Invalid host exception to nVHE hyp!\n");
 	} else if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
@@ -322,9 +317,11 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 		if (file)
 			kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
 		else
-			kvm_err("nVHE hyp BUG at: %016llx!\n", elr_virt + hyp_offset);
+			kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
+				(void *)panic_addr);
 	} else {
-		kvm_err("nVHE hyp panic at: %016llx!\n", elr_virt + hyp_offset);
+		kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
+			(void *)panic_addr);
 	}
 
 	kvm_nvhe_dump_backtrace(hyp_offset);
diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 54ad86d13784..19aba43d9da4 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -111,7 +111,7 @@ static bool is_ignored_symbol(const char *name, char type)
 		".LASANPC",		/* s390 kasan local symbols */
 		"__crc_",		/* modversions */
 		"__efistub_",		/* arm64 EFI stub namespace */
-		"__kvm_nvhe_",		/* arm64 non-VHE KVM namespace */
+		"__kvm_nvhe_$",		/* arm64 local symbols in non-VHE KVM namespace */
 		"__AArch64ADRPThunk_",	/* arm64 lld */
 		"__ARMV5PILongThunk_",	/* arm lld */
 		"__ARMV7PILongThunk_",
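[Editor's note] The one-character kallsyms change above is easy to misread, so here is a small standalone sketch of its effect. With the ignore prefix tightened from "__kvm_nvhe_" to "__kvm_nvhe_$", only assembler-local symbols in the namespace (such as $x/$d mapping symbols) stay hidden, while real hyp functions become visible to kallsyms and thus to %pB. is_hidden() and the sample names below are illustrative stand-ins, not the kernel's is_ignored_symbol().

/*
 * Sketch of the narrowed filter: hide only "__kvm_nvhe_$..." local
 * symbols; keep the rest of the __kvm_nvhe_ namespace for backtraces.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool is_hidden(const char *name)
{
	static const char prefix[] = "__kvm_nvhe_$";

	return strncmp(name, prefix, sizeof(prefix) - 1) == 0;
}

int main(void)
{
	const char *syms[] = {
		"__kvm_nvhe_hyp_panic",		/* kept: real hyp function */
		"__kvm_nvhe___kvm_vcpu_run",	/* kept: real hyp function */
		"__kvm_nvhe_$x.21",		/* hidden: local mapping symbol */
	};

	for (unsigned int i = 0; i < sizeof(syms) / sizeof(syms[0]); i++)
		printf("%-28s -> %s\n", syms[i],
		       is_hidden(syms[i]) ? "ignored" : "in kallsyms");
	return 0;
}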