From patchwork Tue Jun 7 16:50:43 2022
From: Kalesh Singh
Date: Tue, 7 Jun 2022 09:50:43 -0700
Subject: [PATCH v3 1/5] KVM: arm64: Factor out common stack unwinding logic
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
    tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
    Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra,
    Andrew Jones, Marco Elver, Kefeng Wang, Zenghui Yu, Keir Fraser,
    Ard Biesheuvel, Oliver Upton, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Message-Id: <20220607165105.639716-2-kaleshsingh@google.com>
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
References: <20220607165105.639716-1-kaleshsingh@google.com>

Factor out the stack unwinding logic common to both the host kernel and
the nVHE hypervisor into __unwind_next(). This allows it to be reused for
nVHE hypervisor stack unwinding later in this series.

Signed-off-by: Kalesh Singh
Reviewed-by: Mark Brown
---

Changes in v3:
  - Add Mark's Reviewed-by tag

 arch/arm64/kernel/stacktrace.c | 36 +++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 0467cb79f080..ee60c279511c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,23 +81,19 @@ NOKPROBE_SYMBOL(unwind_init);
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct task_struct *tsk,
-                               struct unwind_state *state)
+static int notrace __unwind_next(struct task_struct *tsk,
+                                 struct unwind_state *state,
+                                 struct stack_info *info)
 {
         unsigned long fp = state->fp;
-        struct stack_info info;
-
-        /* Final frame; nothing to unwind */
-        if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-                return -ENOENT;

         if (fp & 0x7)
                 return -EINVAL;

-        if (!on_accessible_stack(tsk, fp, 16, &info))
+        if (!on_accessible_stack(tsk, fp, 16, info))
                 return -EINVAL;

-        if (test_bit(info.type, state->stacks_done))
+        if (test_bit(info->type, state->stacks_done))
                 return -EINVAL;

         /*
@@ -113,7 +109,7 @@ static int notrace unwind_next(struct task_struct *tsk,
          * stack to another, it's never valid to unwind back to that first
          * stack.
          */
-        if (info.type == state->prev_type) {
+        if (info->type == state->prev_type) {
                 if (fp <= state->prev_fp)
                         return -EINVAL;
         } else {
@@ -127,7 +123,25 @@ static int notrace unwind_next(struct task_struct *tsk,
         state->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
         state->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
         state->prev_fp = fp;
-        state->prev_type = info.type;
+        state->prev_type = info->type;
+
+        return 0;
+}
+NOKPROBE_SYMBOL(__unwind_next);
+
+static int notrace unwind_next(struct task_struct *tsk,
+                               struct unwind_state *state)
+{
+        struct stack_info info;
+        int err;
+
+        /* Final frame; nothing to unwind */
+        if (state->fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+                return -ENOENT;
+
+        err = __unwind_next(tsk, state, &info);
+        if (err)
+                return err;

         state->pc = ptrauth_strip_insn_pac(state->pc);

From patchwork Tue Jun 7 16:50:44 2022
From: Kalesh Singh
Date: Tue, 7 Jun 2022 09:50:44 -0700
Subject: [PATCH v3 2/5] KVM: arm64: Compile stacktrace.nvhe.o
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
    tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
    Alexei Starovoitov, "Madhavan T. Venkataraman", Andrew Jones, Kefeng Wang,
    Zenghui Yu, Keir Fraser, Ard Biesheuvel, Oliver Upton,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org
Message-Id: <20220607165105.639716-3-kaleshsingh@google.com>
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
References: <20220607165105.639716-1-kaleshsingh@google.com>

Recompile the stack unwinding code for use with the nVHE hypervisor. This
is a preparatory patch that will allow reusing most of the kernel
unwinding logic in the nVHE hypervisor.
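
To illustrate the pattern this relies on (an illustrative sketch only, not
code from this patch; the helper names below are placeholders): the same C
source file is compiled once for the kernel and once into the nVHE hyp
object, and the __KVM_NVHE_HYPERVISOR__ define, set only for the nVHE
build, selects which parts each build sees:

    /* example.c - built for both the kernel and the nVHE hyp object */

    #ifndef __KVM_NVHE_HYPERVISOR__
    /* Kernel-only code: free to use facilities unavailable at EL2. */
    static void example_kernel_only(void) { }
    #else /* __KVM_NVHE_HYPERVISOR__ */
    /* Hypervisor-only code: limited to what is mapped for the hyp object. */
    static void example_hyp_only(void) { }
    #endif /* !__KVM_NVHE_HYPERVISOR__ */

    /* Common code, visible to both builds. */
    static int example_common(int x)
    {
            return x + 1;
    }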
Suggested-by: Mark Rutland
Signed-off-by: Kalesh Singh
Reviewed-by: Mark Brown
---

Changes in v3:
  - Add Mark's Reviewed-by tag

Changes in v2:
  - Split out refactoring of common unwinding logic into a separate patch,
    per Mark Brown

 arch/arm64/include/asm/stacktrace.h | 18 +++++++++-----
 arch/arm64/kernel/stacktrace.c      | 37 ++++++++++++++++-------------
 arch/arm64/kvm/hyp/nvhe/Makefile    |  3 ++-
 3 files changed, 35 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index aec9315bf156..f5af9a94c5a6 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -16,12 +16,14 @@
 #include

 enum stack_type {
-        STACK_TYPE_UNKNOWN,
+#ifndef __KVM_NVHE_HYPERVISOR__
         STACK_TYPE_TASK,
         STACK_TYPE_IRQ,
         STACK_TYPE_OVERFLOW,
         STACK_TYPE_SDEI_NORMAL,
         STACK_TYPE_SDEI_CRITICAL,
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
+        STACK_TYPE_UNKNOWN,
         __NR_STACK_TYPES
 };
@@ -31,11 +33,6 @@ struct stack_info {
         enum stack_type type;
 };

-extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-                           const char *loglvl);
-
-DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
-
 static inline bool on_stack(unsigned long sp, unsigned long size,
                             unsigned long low, unsigned long high,
                             enum stack_type type, struct stack_info *info)
@@ -54,6 +51,12 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
         return true;
 }

+#ifndef __KVM_NVHE_HYPERVISOR__
+extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+                           const char *loglvl);
+
+DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
+
 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
                                 struct stack_info *info)
 {
@@ -88,6 +91,7 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
                         struct stack_info *info) { return false; }
 #endif
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

 /*
@@ -101,6 +105,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
         if (info)
                 info->type = STACK_TYPE_UNKNOWN;

+#ifndef __KVM_NVHE_HYPERVISOR__
         if (on_task_stack(tsk, sp, size, info))
                 return true;
         if (tsk != current || preemptible())
@@ -111,6 +116,7 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
                 return true;
         if (on_irq_stack(sp, size, info))
                 return true;
         if (on_sdei_stack(sp, size, info))
                 return true;
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

         return false;
 }
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ee60c279511c..a84e38d41d38 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -129,6 +129,26 @@ static int notrace __unwind_next(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(__unwind_next);

+static int notrace unwind_next(struct task_struct *tsk,
+                               struct unwind_state *state);
+
+static void notrace unwind(struct task_struct *tsk,
+                           struct unwind_state *state,
+                           stack_trace_consume_fn consume_entry, void *cookie)
+{
+        while (1) {
+                int ret;
+
+                if (!consume_entry(cookie, state->pc))
+                        break;
+                ret = unwind_next(tsk, state);
+                if (ret < 0)
+                        break;
+        }
+}
+NOKPROBE_SYMBOL(unwind);
+
+#ifndef __KVM_NVHE_HYPERVISOR__
 static int notrace unwind_next(struct task_struct *tsk,
                                struct unwind_state *state)
 {
@@ -171,22 +191,6 @@ static int notrace unwind_next(struct task_struct *tsk,
 }
 NOKPROBE_SYMBOL(unwind_next);

-static void notrace unwind(struct task_struct *tsk,
-                           struct unwind_state *state,
-                           stack_trace_consume_fn consume_entry, void *cookie)
-{
-        while (1) {
-                int ret;
-
-                if (!consume_entry(cookie, state->pc))
-                        break;
-                ret = unwind_next(tsk, state);
-                if (ret < 0)
-                        break;
-        }
-}
-NOKPROBE_SYMBOL(unwind);
-
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
         char *loglvl = arg;
@@ -238,3 +242,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

         unwind(task, &state, consume_entry, cookie);
 }
+#endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..c0ff0d6fc403 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,8 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))

 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
          hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
-         cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o
+         cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o \
+         ../../../kernel/stacktrace.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
          ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o
From patchwork Tue Jun 7 16:50:45 2022
From: Kalesh Singh
Date: Tue, 7 Jun 2022 09:50:45 -0700
Subject: [PATCH v3 3/5] KVM: arm64: Add hypervisor overflow stack
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
    tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
    Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra,
    Andrew Jones, Zenghui Yu, Kefeng Wang, Keir Fraser, Ard Biesheuvel,
    Oliver Upton, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Message-Id: <20220607165105.639716-4-kaleshsingh@google.com>
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
References: <20220607165105.639716-1-kaleshsingh@google.com>

Allocate a 16-byte aligned secondary stack and switch to it on hypervisor
stack overflow. This provides stack space in which to better handle
overflows, and is used in a subsequent patch to dump the hypervisor
stacktrace.
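
As background, the shape of the mechanism is roughly the following
(illustrative sketch only; the helper names are placeholders, while the
real per-CPU stack and the SP switch are in the diff below): the emergency
stack is a statically allocated, 16-byte aligned per-CPU array, and a
later patch in the series identifies it at unwind time with a simple range
check on the stack pointer:

    #include <linux/percpu.h>

    /* Page-sized, 16-byte aligned per-CPU emergency stack. */
    static DEFINE_PER_CPU(unsigned long [PAGE_SIZE / sizeof(long)],
                          example_overflow_stack) __aligned(16);

    /*
     * Placeholder helper: report whether the 'size' bytes at 'sp' lie
     * entirely within this CPU's overflow stack, mirroring the
     * on_overflow_stack() check added later in the series.
     */
    static bool example_on_overflow_stack(unsigned long sp, unsigned long size)
    {
            unsigned long low = (unsigned long)this_cpu_ptr(example_overflow_stack);
            unsigned long high = low + PAGE_SIZE;

            return low <= sp && sp + size <= high;
    }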

Signed-off-by: Kalesh Singh
---

 arch/arm64/kernel/stacktrace.c | 3 +++
 arch/arm64/kvm/hyp/nvhe/host.S | 9 ++-------
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index a84e38d41d38..f346b4c66f1c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -242,4 +242,7 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

         unwind(task, &state, consume_entry, cookie);
 }
+#else /* __KVM_NVHE_HYPERVISOR__ */
+DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
+        __aligned(16);
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ea6a397b64a6..4e3032a244e1 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -177,13 +177,8 @@ SYM_FUNC_END(__host_hvc)
         b       hyp_panic

 .L__hyp_sp_overflow\@:
-        /*
-         * Reset SP to the top of the stack, to allow handling the hyp_panic.
-         * This corrupts the stack but is ok, since we won't be attempting
-         * any unwinding here.
-         */
-        ldr_this_cpu    x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
-        mov     sp, x0
+        /* Switch to the overflow stack */
+        adr_this_cpu    sp, overflow_stack + PAGE_SIZE, x0

         b       hyp_panic_bad_stack
         ASM_BUG()
From patchwork Tue Jun 7 16:50:46 2022
From: Kalesh Singh
Date: Tue, 7 Jun 2022 09:50:46 -0700
Subject: [PATCH v3 4/5] KVM: arm64: Allocate shared stacktrace pages
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
    tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
    Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra,
    Andrew Jones, Zenghui Yu, Keir Fraser, Kefeng Wang, Ard Biesheuvel,
    Oliver Upton, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Message-Id: <20220607165105.639716-5-kaleshsingh@google.com>
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
References: <20220607165105.639716-1-kaleshsingh@google.com>

Allocate a stacktrace page per CPU, shared between the host and the nVHE
hypervisor. The nVHE hypervisor can use this shared area to dump its
stacktrace addresses on hyp_panic(). Symbolization and printing of the
stacktrace can then be handled by the host in EL1 (done in a later patch
in this series).
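
The layout convention for the shared page is established in the next
patch: the hypervisor writes a zero-terminated array of return addresses
into it. Purely as an illustration of how a consumer walks that layout
(the function and parameter names here are placeholders, not part of the
series):

    /*
     * Walk a zero-terminated array of return addresses, as written into
     * the shared stacktrace page, and hand each entry to a callback.
     */
    static void example_walk_hyp_stacktrace(const unsigned long *trace_page,
                                            void (*consume)(unsigned long pc))
    {
            const unsigned long *pos = trace_page;

            while (*pos)            /* a zero entry marks the end */
                    consume(*pos++);
    }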

Signed-off-by: Kalesh Singh
---

 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/kvm/arm.c             | 34 ++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 11 +++++++++++
 3 files changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..ad31ac68264f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -174,6 +174,7 @@ struct kvm_nvhe_init_params {
         unsigned long hcr_el2;
         unsigned long vttbr;
         unsigned long vtcr;
+        unsigned long stacktrace_hyp_va;
 };

 /* Translate a kernel address @ptr into its equivalent linear mapping */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 400bb0fe2745..c0a936a7623d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -50,6 +50,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);

 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);

@@ -1554,6 +1555,7 @@ static void cpu_prepare_hyp_mode(int cpu)
         tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
         params->tcr_el2 = tcr;

+        params->stacktrace_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
         params->pgd_pa = kvm_mmu_get_httbr();
         if (is_protected_kvm_enabled())
                 params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
@@ -1845,6 +1847,7 @@ static void teardown_hyp_mode(void)
         free_hyp_pgds();
         for_each_possible_cpu(cpu) {
                 free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+                free_page(per_cpu(kvm_arm_hyp_stacktrace_page, cpu));
                 free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order());
         }
 }
@@ -1936,6 +1939,23 @@ static int init_hyp_mode(void)
                 per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
         }

+        /*
+         * Allocate stacktrace pages for Hypervisor-mode.
+         * This is used by the hypervisor to share its stacktrace
+         * with the host on a hyp_panic().
+         */
+        for_each_possible_cpu(cpu) {
+                unsigned long stacktrace_page;
+
+                stacktrace_page = __get_free_page(GFP_KERNEL);
+                if (!stacktrace_page) {
+                        err = -ENOMEM;
+                        goto out_err;
+                }
+
+                per_cpu(kvm_arm_hyp_stacktrace_page, cpu) = stacktrace_page;
+        }
+
         /*
          * Allocate and initialize pages for Hypervisor-mode percpu regions.
          */
@@ -2043,6 +2063,20 @@ static int init_hyp_mode(void)
                 params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
         }

+        /*
+         * Map the hyp stacktrace pages.
+         */
+        for_each_possible_cpu(cpu) {
+                char *stacktrace_page = (char *)per_cpu(kvm_arm_hyp_stacktrace_page, cpu);
+
+                err = create_hyp_mappings(stacktrace_page, stacktrace_page + PAGE_SIZE,
+                                          PAGE_HYP);
+                if (err) {
+                        kvm_err("Cannot map hyp stacktrace page\n");
+                        goto out_err;
+                }
+        }
+
         for_each_possible_cpu(cpu) {
                 char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
                 char *percpu_end = percpu_begin + nvhe_percpu_size();
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e8d4ea2fcfa0..9b81bf2d40d7 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -135,6 +135,17 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,

                 /* Update stack_hyp_va to end of the stack's private VA range */
                 params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
+
+                /*
+                 * Map the stacktrace pages as shared and transfer ownership to
+                 * the hypervisor.
+                 */
+                prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
+                start = (void *)params->stacktrace_hyp_va;
+                end = start + PAGE_SIZE;
+                ret = pkvm_create_mappings(start, end, prot);
+                if (ret)
+                        return ret;
         }

         /*
From patchwork Tue Jun 7 16:50:47 2022
From: Kalesh Singh
Date: Tue, 7 Jun 2022 09:50:47 -0700
Subject: [PATCH v3 5/5] KVM: arm64: Unwind and dump nVHE hypervisor stacktrace
To: mark.rutland@arm.com, broonie@kernel.org, maz@kernel.org
Cc: will@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com,
    tjmercier@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Masami Hiramatsu,
    Alexei Starovoitov, "Madhavan T. Venkataraman", Peter Zijlstra,
    Andrew Jones, Keir Fraser, Zenghui Yu, Kefeng Wang, Ard Biesheuvel,
    Oliver Upton, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Message-Id: <20220607165105.639716-6-kaleshsingh@google.com>
In-Reply-To: <20220607165105.639716-1-kaleshsingh@google.com>
References: <20220607165105.639716-1-kaleshsingh@google.com>

On hyp_panic(), the hypervisor dumps the addresses for its stacktrace
entries to a page shared with the host. The host then symbolizes and
prints the hyp stacktrace before panicking itself.

Example stacktrace:

[  122.051187] kvm [380]: Invalid host exception to nVHE hyp!
[  122.052467] kvm [380]: nVHE HYP call trace:
[  122.052814] kvm [380]: [] __kvm_nvhe___pkvm_vcpu_init_traps+0x1f0/0x1f0
[  122.053865] kvm [380]: [] __kvm_nvhe_hyp_panic+0x130/0x1c0
[  122.054367] kvm [380]: [] __kvm_nvhe___kvm_vcpu_run+0x10/0x10
[  122.054878] kvm [380]: [] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x50
[  122.055412] kvm [380]: [] __kvm_nvhe_handle_trap+0xbc/0x160
[  122.055911] kvm [380]: [] __kvm_nvhe___host_exit+0x64/0x64
[  122.056417] kvm [380]: ---- end of nVHE HYP call trace ----

Signed-off-by: Kalesh Singh
Reviewed-by: Mark Brown
Reported-by: kernel test robot
Reported-by: kernel test robot
---

Changes in v2:
  - Add Mark's Reviewed-by tag

 arch/arm64/include/asm/stacktrace.h | 42 ++++++++++++++--
 arch/arm64/kernel/stacktrace.c      | 75 +++++++++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c        |  4 ++
 arch/arm64/kvm/hyp/nvhe/switch.c    |  4 ++
 4 files changed, 121 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index f5af9a94c5a6..3063912107b0 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -5,6 +5,7 @@
 #ifndef __ASM_STACKTRACE_H
 #define __ASM_STACKTRACE_H

+#include
 #include
 #include
 #include
@@ -19,10 +20,12 @@ enum stack_type {
 #ifndef __KVM_NVHE_HYPERVISOR__
         STACK_TYPE_TASK,
         STACK_TYPE_IRQ,
-        STACK_TYPE_OVERFLOW,
         STACK_TYPE_SDEI_NORMAL,
         STACK_TYPE_SDEI_CRITICAL,
+#else /* __KVM_NVHE_HYPERVISOR__ */
+        STACK_TYPE_HYP,
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
+        STACK_TYPE_OVERFLOW,
         STACK_TYPE_UNKNOWN,
         __NR_STACK_TYPES
 };
@@ -55,6 +58,9 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
                            const char *loglvl);

+extern void hyp_dump_backtrace(unsigned long hyp_offset);
+
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stacktrace_page);
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

 static inline bool on_irq_stack(unsigned long sp, unsigned long size,
@@ -91,8 +97,32 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
                         struct stack_info *info) { return false; }
 #endif
-#endif /* !__KVM_NVHE_HYPERVISOR__ */
+#else /* __KVM_NVHE_HYPERVISOR__ */
+
+extern void hyp_save_backtrace(void);
+
+DECLARE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
+static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
+                                     struct stack_info *info)
+{
+        unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+        unsigned long high = low + PAGE_SIZE;
+
+        return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+                                struct stack_info *info)
+{
+        struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+        unsigned long high = params->stack_hyp_va;
+        unsigned long low = high - PAGE_SIZE;
+        return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
+}
+#endif /* !__KVM_NVHE_HYPERVISOR__ */

 /*
  * We can only safely access per-cpu stacks from current in a non-preemptible
@@ -105,6 +135,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
         if (info)
                 info->type = STACK_TYPE_UNKNOWN;

+        if (on_overflow_stack(sp, size, info))
+                return true;
+
 #ifndef __KVM_NVHE_HYPERVISOR__
         if (on_task_stack(tsk, sp, size, info))
                 return true;
@@ -112,10 +145,11 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
         if (on_irq_stack(sp, size, info))
                 return true;
-        if (on_overflow_stack(sp, size, info))
-                return true;
         if (on_sdei_stack(sp, size, info))
                 return true;
+#else /* __KVM_NVHE_HYPERVISOR__ */
+        if (on_hyp_stack(sp, size, info))
+                return true;
 #endif /* !__KVM_NVHE_HYPERVISOR__ */

         return false;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index f346b4c66f1c..c81dea9760ac 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -104,6 +104,7 @@ static int notrace __unwind_next(struct task_struct *tsk,
  *
  * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
  * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+ * HYP -> OVERFLOW
  *
  * ... but the nesting itself is strict. Once we transition from one
  * stack to another, it's never valid to unwind back to that first
@@ -242,7 +243,81 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,

         unwind(task, &state, consume_entry, cookie);
 }
+
+/**
+ * Symbolizes and dumps the hypervisor backtrace from the shared
+ * stacktrace page.
+ */
+noinline notrace void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+        unsigned long *stacktrace_pos =
+                (unsigned long *)*this_cpu_ptr(&kvm_arm_hyp_stacktrace_page);
+        unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+        unsigned long pc = *stacktrace_pos++;
+
+        kvm_err("nVHE HYP call trace:\n");
+
+        while (pc) {
+                pc &= va_mask;          /* Mask tags */
+                pc += hyp_offset;       /* Convert to kern addr */
+                kvm_err("[<%016lx>] %pB\n", pc, (void *)pc);
+                pc = *stacktrace_pos++;
+        }
+
+        kvm_err("---- end of nVHE HYP call trace ----\n");
+}
 #else /* __KVM_NVHE_HYPERVISOR__ */
 DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], overflow_stack)
         __aligned(16);
+
+static int notrace unwind_next(struct task_struct *tsk,
+                               struct unwind_state *state)
+{
+        struct stack_info info;
+
+        return __unwind_next(tsk, state, &info);
+}
+
+/**
+ * Saves a hypervisor stacktrace entry (address) to the shared stacktrace page.
+ */
+static bool hyp_save_backtrace_entry(void *arg, unsigned long where)
+{
+        struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+        unsigned long **stacktrace_pos = (unsigned long **)arg;
+        unsigned long stacktrace_start, stacktrace_end;
+
+        stacktrace_start = (unsigned long)params->stacktrace_hyp_va;
+        stacktrace_end = stacktrace_start + PAGE_SIZE - (2 * sizeof(long));
+
+        if ((unsigned long) *stacktrace_pos > stacktrace_end)
+                return false;
+
+        /* Save the entry to the current pos in stacktrace page */
+        **stacktrace_pos = where;
+
+        /* A zero entry delimits the end of the stacktrace. */
+        *(*stacktrace_pos + 1) = 0UL;
+
+        /* Increment the current pos */
+        ++*stacktrace_pos;
+
+        return true;
+}
+
+/**
+ * Saves hypervisor stacktrace to the shared stacktrace page.
+ */
+noinline notrace void hyp_save_backtrace(void)
+{
+        struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+        void *stacktrace_start = (void *)params->stacktrace_hyp_va;
+        struct unwind_state state;
+
+        unwind_init(&state, (unsigned long)__builtin_frame_address(0),
+                    _THIS_IP_);
+
+        unwind(NULL, &state, hyp_save_backtrace_entry, &stacktrace_start);
+}
+
 #endif /* !__KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index f66c0142b335..96c5dc5529a1 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -353,6 +354,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
                         (void *)panic_addr);
         }

+        /* Dump the hypervisor stacktrace */
+        hyp_dump_backtrace(hyp_offset);
+
         /*
          * Hyp has panicked and we're going to handle that by panicking the
          * kernel. The kernel offset will be revealed in the panic so we're
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..add157f8e3f3 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -375,6 +376,9 @@ asmlinkage void __noreturn hyp_panic(void)
                 __sysreg_restore_state_nvhe(host_ctxt);
         }

+        /* Save the hypervisor stacktrace */
+        hyp_save_backtrace();
+
         __hyp_do_panic(host_ctxt, spsr, elr, par);
         unreachable();
 }