From patchwork Thu Jul 21 05:57:24 2022
X-Patchwork-Submitter: Kalesh Singh <kaleshsingh@google.com>
X-Patchwork-Id: 12924751
Date: Wed, 20 Jul 2022 22:57:24 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-14-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
Subject: [PATCH v5 13/17] KVM: arm64: Prepare non-protected nVHE hypervisor stacktrace
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
 yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, android-mm@google.com, kernel-team@android.com

In non-protected nVHE mode (non-pKVM) the host can directly access
hypervisor memory, so unwinding of the hypervisor stacktrace is done
from EL1 to save on memory for shared buffers.

To unwind the hypervisor stack from EL1 the host needs to know the
starting point for the unwind and information that will allow it to
translate hypervisor stack addresses to the corresponding kernel
addresses.

This patch sets up this bookkeeping; it is made use of later in the
series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
---
Changes in v5:
  - Use regular comments instead of doc comments, per Fuad
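A short illustrative aside (not part of the patch): the hyp-VA to
kernel-VA translation that this bookkeeping enables can be sketched in
plain userspace C. Everything in the sketch is hypothetical -- the base
addresses are invented, and hyp_to_kern() simply assumes the host has
its own kernel-VA view of the hyp stack pages at a known base.

#include <stdio.h>

/*
 * Hypothetical helper: rebase a hyp stack address into the host's own
 * kernel-VA view of the same memory. Both mappings cover the same
 * physical pages, so a constant offset is enough.
 */
static unsigned long hyp_to_kern(unsigned long hyp_addr,
				 unsigned long hyp_stack_base,
				 unsigned long kern_stack_base)
{
	return kern_stack_base + (hyp_addr - hyp_stack_base);
}

int main(void)
{
	/* Example values only; these do not come from real hardware. */
	unsigned long hyp_base  = 0xffff800008010000UL; /* stacktrace_info->stack_base */
	unsigned long kern_base = 0xffff000012340000UL; /* host-side mapping           */
	unsigned long hyp_fp    = hyp_base + 0xf80;     /* stacktrace_info->fp         */

	printf("kernel VA of hyp fp: 0x%lx\n",
	       hyp_to_kern(hyp_fp, hyp_base, kern_base));
	return 0;
}

The same constant-offset rebasing would apply to addresses on the
overflow stack via overflow_stack_base.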

 arch/arm64/include/asm/kvm_asm.h         | 16 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/nvhe.h |  4 ++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..53035763e48e 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -176,6 +176,22 @@ struct kvm_nvhe_init_params {
 	unsigned long vtcr;
 };
 
+/*
+ * Used by the host in EL1 to dump the nVHE hypervisor backtrace on
+ * hyp_panic() in non-protected mode.
+ *
+ * @stack_base: hyp VA of the hyp_stack base.
+ * @overflow_stack_base: hyp VA of the hyp_overflow_stack base.
+ * @fp: hyp FP where the backtrace begins.
+ * @pc: hyp PC where the backtrace begins.
+ */
+struct kvm_nvhe_stacktrace_info {
+	unsigned long stack_base;
+	unsigned long overflow_stack_base;
+	unsigned long fp;
+	unsigned long pc;
+};
+
 /* Translate a kernel address @ptr into its equivalent linear mapping */
 #define kvm_ksym_ref(ptr)						\
 	({								\
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 05d7e03e0a8c..8f02803a005f 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_STACKTRACE_NVHE_H
 #define __ASM_STACKTRACE_NVHE_H
 
+#include <asm/kvm_asm.h>
 #include <asm/stacktrace/common.h>
 
 /*
@@ -52,6 +53,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  * In protected mode, the unwinding is done by the hypervisor in EL2.
  */
 
+DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 60461c033a04..cbd365f4f26a 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,6 +9,28 @@
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
 
+DEFINE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+
+/*
+ * hyp_prepare_backtrace - Prepare non-protected nVHE backtrace.
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the information needed by the host to unwind the non-protected
+ * nVHE hypervisor stack in EL1.
+ */
+static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info = this_cpu_ptr(&kvm_stacktrace_info);
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+	stacktrace_info->overflow_stack_base = (unsigned long)this_cpu_ptr(overflow_stack);
+	stacktrace_info->fp = fp;
+	stacktrace_info->pc = pc;
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
@@ -89,4 +111,6 @@ void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_save_backtrace(fp, pc);
+	else
+		hyp_prepare_backtrace(fp, pc);
 }
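
For completeness, a rough, self-contained sketch of why saving just fp
and pc gives the host enough to reconstruct the backtrace: AArch64
frame records are { previous fp, saved lr } pairs, so a walk only needs
a starting frame pointer plus a way to rebase each hyp address into
host-readable memory. This is purely illustrative -- the stack contents
and addresses below are fabricated, and the real EL1 unwinder added
later in this series builds on the kernel's common stacktrace code
rather than an open-coded loop like this one.

#include <stdio.h>

/* Matches the layout introduced in this patch. */
struct kvm_nvhe_stacktrace_info {
	unsigned long stack_base;
	unsigned long overflow_stack_base;
	unsigned long fp;
	unsigned long pc;
};

int main(void)
{
	/*
	 * Fake "hyp stack" holding two AArch64 frame records, each a
	 * { previous fp, saved lr } pair. All values are fabricated.
	 */
	unsigned long stack[8] = { 0 };
	unsigned long hyp_base = 0x1000;		/* pretend hyp VA */
	unsigned long kern_base = (unsigned long)stack;	/* host's view    */

	stack[2] = 0x1030;	/* inner record: previous fp        */
	stack[3] = 0xd001;	/* inner record: saved lr           */
	stack[6] = 0;		/* outer record: fp of 0 terminates */
	stack[7] = 0xd002;	/* outer record: saved lr (unused)  */

	struct kvm_nvhe_stacktrace_info info = {
		.stack_base = hyp_base,
		.fp = hyp_base + 0x10,	/* hyp VA of the inner record */
		.pc = 0xd000,		/* pc at the point of panic   */
	};

	unsigned long fp = info.fp, pc = info.pc;

	/* Walk the records until fp leaves the stack bounds. */
	while (fp >= info.stack_base && fp < info.stack_base + sizeof(stack)) {
		/* Rebase the hyp address into the host-readable copy. */
		unsigned long *rec =
			(unsigned long *)(kern_base + (fp - info.stack_base));

		printf(" [<0x%016lx>]\n", pc);
		fp = rec[0];
		pc = rec[1];
	}
	return 0;
}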