From patchwork Wed Apr 27 18:46:56 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12829337
Date: Wed, 27 Apr 2022 11:46:56 -0700
In-Reply-To: <20220427184716.1949239-1-kaleshsingh@google.com>
Message-Id: <20220427184716.1949239-2-kaleshsingh@google.com>
References: <20220427184716.1949239-1-kaleshsingh@google.com>
Subject: [PATCH 1/4] KVM: arm64: Compile stacktrace.nvhe.o
From: Kalesh Singh
Cc: mark.rutland@arm.com, will@kernel.org, maz@kernel.org,
    qperret@google.com, tabba@google.com, surenb@google.com,
    kernel-team@android.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org

Recompile stack unwinding code for use with the nVHE hypervisor. This is
a preparatory patch that will allow reusing most of the kernel unwinding
logic in the nVHE hypervisor.
Suggested-by: Mark Rutland
Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace.h | 18 ++++---
 arch/arm64/kernel/stacktrace.c      | 73 ++++++++++++++++++-----------
 arch/arm64/kvm/hyp/nvhe/Makefile    |  3 +-
 3 files changed, 60 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index aec9315bf156..f5af9a94c5a6 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -16,12 +16,14 @@
 #include

 enum stack_type {
-	STACK_TYPE_UNKNOWN,
+#ifndef __KVM_NVHE_HYPERVISOR__
 	STACK_TYPE_TASK,
 	STACK_TYPE_IRQ,
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_UNKNOWN,
 	__NR_STACK_TYPES
 };

@@ -31,11 +33,6 @@ struct stack_info {
 	enum stack_type type;
 };

-extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-			   const char *loglvl);
-
-DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
-
 static inline bool on_stack(unsigned long sp, unsigned long size,
			    unsigned long low, unsigned long high,
			    enum stack_type type, struct stack_info *info)
@@ -54,6 +51,12 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
 	return true;
 }

+#ifndef __KVM_NVHE_HYPERVISOR__
+extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+			   const char *loglvl);
+
+DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
+
 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
				struct stack_info *info)
 {
@@ -88,6 +91,7 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
			struct stack_info *info) { return false; }
 #endif
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

 /*
@@ -101,6 +105,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	if (info)
 		info->type = STACK_TYPE_UNKNOWN;

+#ifndef __KVM_NVHE_HYPERVISOR__
 	if (on_task_stack(tsk, sp, size, info))
 		return true;
 	if (tsk != current || preemptible())
@@ -111,6 +116,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 		return true;
 	if (on_sdei_stack(sp, size, info))
 		return true;
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

 	return false;
 }
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 0467cb79f080..a84e38d41d38 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,23 +81,19 @@ NOKPROBE_SYMBOL(unwind_init);
 * records (e.g. a cycle), determined based on the location and fp value of A
 * and the location (but not the fp value) of B.
 */
-static int notrace unwind_next(struct task_struct *tsk,
-			       struct unwind_state *state)
+static int notrace __unwind_next(struct task_struct *tsk,
+				 struct unwind_state *state,
+				 struct stack_info *info)
 {
 	unsigned long fp = state->fp;
-	struct stack_info info;
-
-	/* Final frame; nothing to unwind */
-	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-		return -ENOENT;

 	if (fp & 0x7)
 		return -EINVAL;

-	if (!on_accessible_stack(tsk, fp, 16, &info))
+	if (!on_accessible_stack(tsk, fp, 16, info))
 		return -EINVAL;

-	if (test_bit(info.type, state->stacks_done))
+	if (test_bit(info->type, state->stacks_done))
 		return -EINVAL;

 	/*
@@ -113,7 +109,7 @@ static int notrace unwind_next(struct task_struct *tsk,
	 * stack to another, it's never valid to unwind back to that first
	 * stack.
	 */
-	if (info.type == state->prev_type) {
+	if (info->type == state->prev_type) {
 		if (fp <= state->prev_fp)
 			return -EINVAL;
 	} else {
@@ -127,7 +123,45 @@ static int notrace unwind_next(struct task_struct *tsk,
 	state->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	state->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
 	state->prev_fp = fp;
-	state->prev_type = info.type;
+	state->prev_type = info->type;
+
+	return 0;
+}
+NOKPROBE_SYMBOL(__unwind_next);
+
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state);
+
+static void notrace unwind(struct task_struct *tsk,
+			   struct unwind_state *state,
+			   stack_trace_consume_fn consume_entry, void *cookie)
+{
+	while (1) {
+		int ret;
+
+		if (!consume_entry(cookie, state->pc))
+			break;
+		ret = unwind_next(tsk, state);
+		if (ret < 0)
+			break;
+	}
+}
+NOKPROBE_SYMBOL(unwind);
+
+#ifndef __KVM_NVHE_HYPERVISOR__
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state)
+{
+	struct stack_info info;
+	int err;
+
+	/* Final frame; nothing to unwind */
+	if (state->fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+		return -ENOENT;
+
+	err = __unwind_next(tsk, state, &info);
+	if (err)
+		return err;

 	state->pc = ptrauth_strip_insn_pac(state->pc);

@@ -157,22 +191,6 @@ static int notrace unwind_next(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(unwind_next);

-static void notrace unwind(struct task_struct *tsk,
-			   struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
-{
-	while (1) {
-		int ret;
-
-		if (!consume_entry(cookie, state->pc))
-			break;
-		ret = unwind_next(tsk, state);
-		if (ret < 0)
-			break;
-	}
-}
-NOKPROBE_SYMBOL(unwind);
-
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
 	char *loglvl = arg;
@@ -224,3 +242,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

 	unwind(task, &state, consume_entry, cookie);
 }
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..c0ff0d6fc403 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,8 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))

 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o \
+	 ../../../kernel/stacktrace.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o

From patchwork Wed Apr 27 18:46:57 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12829338
Date: Wed, 27 Apr 2022 11:46:57 -0700
In-Reply-To: <20220427184716.1949239-1-kaleshsingh@google.com>
Message-Id: <20220427184716.1949239-3-kaleshsingh@google.com>
References: <20220427184716.1949239-1-kaleshsingh@google.com>
Subject: [PATCH 2/4] KVM: arm64: Add hypervisor overflow stack
From: Kalesh Singh
Cc: mark.rutland@arm.com, will@kernel.org, maz@kernel.org,
    qperret@google.com, tabba@google.com, surenb@google.com,
    kernel-team@android.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org

Allocate and switch to a 16-byte aligned secondary stack on overflow.
This provides stack space to better handle overflows, and is used in a
subsequent patch to dump the hypervisor stacktrace.
Signed-off-by: Kalesh Singh
---
 arch/arm64/kernel/stacktrace.c | 3 +++
 arch/arm64/kvm/hyp/nvhe/host.S | 9 ++-------
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a84e38d41d38..f346b4c66f1c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -242,4 +242,7 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

 	unwind(task, &state, consume_entry, cookie);
 }
+#else /* __KVM_NVHE_HYPERVISOR__ */
+DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
+	__aligned(16);
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 09b5254fb497..1cd2de4f039e 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -179,13 +179,8 @@ SYM_FUNC_END(__host_hvc)
	b	hyp_panic

.L__hyp_sp_overflow\@:
-	/*
-	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
-	 * This corrupts the stack but is ok, since we won't be attempting
-	 * any unwinding here.
-	 */
-	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
-	mov	sp, x0
+	/* Switch to the overflow stack */
+	adr_this_cpu sp, overflow_stack + PAGE_SIZE, x0

	b	hyp_panic_bad_stack
	ASM_BUG()

From patchwork Wed Apr 27 18:46:58 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12829339
Date: Wed, 27 Apr 2022 11:46:58 -0700
In-Reply-To: <20220427184716.1949239-1-kaleshsingh@google.com>
Message-Id: <20220427184716.1949239-4-kaleshsingh@google.com>
References: <20220427184716.1949239-1-kaleshsingh@google.com>
Subject: [PATCH 3/4] KVM: arm64: Allocate shared stacktrace pages
From: Kalesh Singh
Cc: mark.rutland@arm.com, will@kernel.org, maz@kernel.org,
    qperret@google.com, tabba@google.com, surenb@google.com,
    kernel-team@android.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org

The nVHE hypervisor can use this shared area to dump its stacktrace
addresses on hyp_panic(). Symbolization and printing the stacktrace can
then be handled by the host in EL1 (done in a later patch in this
series).
Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/kvm/arm.c             | 34 ++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 11 +++++++++++
 3 files changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..ad31ac68264f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -174,6 +174,7 @@ struct kvm_nvhe_init_params {
 	unsigned long hcr_el2;
 	unsigned long vttbr;
 	unsigned long vtcr;
+	unsigned long stacktrace_hyp_va;
 };

 /* Translate a kernel address @ptr into its equivalent linear mapping */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index dd257d9f21a2..1b21d5a99bfc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -50,6 +50,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);

 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);

@@ -1483,6 +1484,7 @@ static void cpu_prepare_hyp_mode(int cpu)
 	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
 	params->tcr_el2 = tcr;

+	params->stacktrace_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
 	params->pgd_pa = kvm_mmu_get_httbr();
 	if (is_protected_kvm_enabled())
 		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
@@ -1776,6 +1778,7 @@ static void teardown_hyp_mode(void)
 	free_hyp_pgds();
 	for_each_possible_cpu(cpu) {
 		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+		free_page(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
 		free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order());
 	}
 }
@@ -1867,6 +1870,23 @@ static int init_hyp_mode(void)
 		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
 	}

+	/*
+	 * Allocate stacktrace pages for Hypervisor-mode.
+	 * This is used by the hypervisor to share its stacktrace
+	 * with the host on a hyp_panic().
+	 */
+	for_each_possible_cpu(cpu) {
+		unsigned long stacktrace_page;
+
+		stacktrace_page = __get_free_page(GFP_KERNEL);
+		if (!stacktrace_page) {
+			err = -ENOMEM;
+			goto out_err;
+		}
+
+		per_cpu(kvm_arm_hyp_stacktrace_page, cpu) = stacktrace_page;
+	}
+
 	/*
	 * Allocate and initialize pages for Hypervisor-mode percpu regions.
	 */
@@ -1974,6 +1994,20 @@ static int init_hyp_mode(void)
 		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}

+	/*
+	 * Map the hyp stacktrace pages.
+	 */
+	for_each_possible_cpu(cpu) {
+		char *stacktrace_page = (char *)per_cpu(kvm_arm_hyp_stacktrace_page, cpu);
+
+		err = create_hyp_mappings(stacktrace_page, stacktrace_page + PAGE_SIZE,
+					  PAGE_HYP);
+		if (err) {
+			kvm_err("Cannot map hyp stacktrace page\n");
+			goto out_err;
+		}
+	}
+
 	for_each_possible_cpu(cpu) {
 		char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
 		char *percpu_end = percpu_begin + nvhe_percpu_size();
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e8d4ea2fcfa0..9b81bf2d40d7 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -135,6 +135,17 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,

 		/* Update stack_hyp_va to end of the stack's private VA range */
 		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
+
+		/*
+		 * Map the stacktrace pages as shared and transfer ownership to
+		 * the hypervisor.
+		 */
+		prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
+		start = (void *)params->stacktrace_hyp_va;
+		end = start + PAGE_SIZE;
+		ret = pkvm_create_mappings(start, end, prot);
+		if (ret)
+			return ret;
 	}

 	/*

From patchwork Wed Apr 27 18:46:59 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12829340
Date: Wed, 27 Apr 2022 11:46:59 -0700
In-Reply-To: <20220427184716.1949239-1-kaleshsingh@google.com>
Message-Id: <20220427184716.1949239-5-kaleshsingh@google.com>
References: <20220427184716.1949239-1-kaleshsingh@google.com>
Subject: [PATCH 4/4] KVM: arm64: Unwind and dump nVHE hypervisor stacktrace
From: Kalesh Singh
Cc: mark.rutland@arm.com, will@kernel.org, maz@kernel.org,
    qperret@google.com, tabba@google.com, surenb@google.com,
    kernel-team@android.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org

On hyp_panic(), the hypervisor dumps the addresses for its stacktrace
entries to a page shared with the host. The host then symbolizes and
prints the hyp stacktrace before panicking itself.

Example stacktrace:
[  122.051187] kvm [380]: Invalid host exception to nVHE hyp!
[  122.052467] kvm [380]: nVHE HYP call trace:
[  122.052814] kvm [380]: [] __kvm_nvhe___pkvm_vcpu_init_traps+0x1f0/0x1f0
[  122.053865] kvm [380]: [] __kvm_nvhe_hyp_panic+0x130/0x1c0
[  122.054367] kvm [380]: [] __kvm_nvhe___kvm_vcpu_run+0x10/0x10
[  122.054878] kvm [380]: [] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x50
[  122.055412] kvm [380]: [] __kvm_nvhe_handle_trap+0xbc/0x160
[  122.055911] kvm [380]: [] __kvm_nvhe___host_exit+0x64/0x64
[  122.056417] kvm [380]: ---- end of nVHE HYP call trace ----

Signed-off-by: Kalesh Singh
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/stacktrace.h | 42 ++++++++++++++--
 arch/arm64/kernel/stacktrace.c      | 75 +++++++++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c        |  4 ++
 arch/arm64/kvm/hyp/nvhe/switch.c    |  4 ++
 4 files changed, 121 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index f5af9a94c5a6..3063912107b0 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -5,6 +5,7 @@
 #ifndef __ASM_STACKTRACE_H
 #define __ASM_STACKTRACE_H

+#include
 #include
 #include
 #include
@@ -19,10 +20,12 @@ enum stack_type {
 #ifndef __KVM_NVHE_HYPERVISOR__
 	STACK_TYPE_TASK,
 	STACK_TYPE_IRQ,
-	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+#else /* __KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_HYP,
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
+	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_UNKNOWN,
 	__NR_STACK_TYPES
 };
@@ -55,6 +58,9 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
			   const char *loglvl);

+extern void hyp_dump_backtrace(unsigned long hyp_offset);
+
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
@@ -91,8 +97,32 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
			struct stack_info *info) { return false; }
 #endif
-#endif /* !__KVM_NVHE_HYPERVISOR__ */
+#else /* __KVM_NVHE_HYPERVISOR__ */
+
+extern void hyp_save_backtrace(void);
+
+DECLARE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+				     struct stack_info *info)
+{
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
+}
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

 /*
  * We can only safely access per-cpu stacks from current in a non-preemptible
@@ -105,6 +135,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	if (info)
 		info->type = STACK_TYPE_UNKNOWN;

+	if (on_overflow_stack(sp, size, info))
+		return true;
+
 #ifndef __KVM_NVHE_HYPERVISOR__
 	if (on_task_stack(tsk, sp, size, info))
 		return true;
@@ -112,10 +145,11 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 		return false;
 	if (on_irq_stack(sp, size, info))
 		return true;
-	if (on_overflow_stack(sp, size, info))
-		return true;
 	if (on_sdei_stack(sp, size, info))
 		return true;
+#else /* __KVM_NVHE_HYPERVISOR__ */
+	if (on_hyp_stack(sp, size, info))
+		return true;
 #endif /* !__KVM_NVHE_HYPERVISOR__ */

 	return false;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index f346b4c66f1c..c81dea9760ac 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -104,6 +104,7 @@ static int notrace __unwind_next(struct task_struct *tsk,
 *
 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
 *
 * ... but the nesting itself is strict. Once we transition from one
 * stack to another, it's never valid to unwind back to that first
@@ -242,7 +243,81 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

 	unwind(task, &state, consume_entry, cookie);
 }
+
+/**
+ * Symbolizes and dumps the hypervisor backtrace from the shared
+ * stacktrace page.
+ */
+noinline notrace void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	unsigned long *stacktrace_pos =
+		(unsigned long *)*this_cpu_ptr(&kvm_arm_hyp_stacktrace_page);
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+	unsigned long pc = *stacktrace_pos++;
+
+	kvm_err("nVHE HYP call trace:\n");
+
+	while (pc) {
+		pc &= va_mask;		/* Mask tags */
+		pc += hyp_offset;	/* Convert to kern addr */
+		kvm_err("[<%016lx>] %pB\n", pc, (void *)pc);
+		pc = *stacktrace_pos++;
+	}
+
+	kvm_err("---- end of nVHE HYP call trace ----\n");
+}
 #else /* __KVM_NVHE_HYPERVISOR__ */
 DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
	__aligned(16);
+
+static int notrace unwind_next(struct task_struct *tsk,
+			       struct unwind_state *state)
+{
+	struct stack_info info;
+
+	return __unwind_next(tsk, state, &info);
+}
+
+/**
+ * Saves a hypervisor stacktrace entry (address) to the shared stacktrace page.
+ */ +static bool hyp_save_backtrace_entry(void *arg, unsigned long where) +{ + struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params); + unsigned long **stacktrace_pos = (unsigned long **)arg; + unsigned long stacktrace_start, stacktrace_end; + + stacktrace_start = (unsigned long)params->stacktrace_hyp_va; + stacktrace_end = stacktrace_start + PAGE_SIZE - (2 * sizeof(long)); + + if ((unsigned long) *stacktrace_pos > stacktrace_end) + return false; + + /* Save the entry to the current pos in stacktrace page */ + **stacktrace_pos = where; + + /* A zero entry delimits the end of the stacktrace. */ + *(*stacktrace_pos + 1) = 0UL; + + /* Increment the current pos */ + ++*stacktrace_pos; + + return true; +} + +/** + * Saves hypervisor stacktrace to the shared stacktrace page. + */ +noinline notrace void hyp_save_backtrace(void) +{ + struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params); + void *stacktrace_start = (void *)params->stacktrace_hyp_va; + struct unwind_state state; + + unwind_init(&state, (unsigned long)__builtin_frame_address(0), + _THIS_IP_); + + unwind(NULL, &state, hyp_save_backtrace_entry, &stacktrace_start); +} + #endif /* !__KVM_NVHE_HYPERVISOR__ */ diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index a377b871bf58..ee5adc9bdb8c 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -323,6 +324,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, (void *)panic_addr); } + /* Dump the hypervisor stacktrace */ + hyp_dump_backtrace(hyp_offset); + /* * Hyp has panicked and we're going to handle that by panicking the * kernel. 
The kernel offset will be revealed in the panic so we're diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 978f1b94fb25..95d810e86c7d 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -395,6 +396,9 @@ asmlinkage void __noreturn hyp_panic(void) __sysreg_restore_state_nvhe(host_ctxt); } + /* Save the hypervisor stacktrace */ + hyp_save_backtrace(); + __hyp_do_panic(host_ctxt, spsr, elr, par); unreachable(); }