From patchwork Thu Jul 21 05:57:28 2022
X-Patchwork-Submitter: Kalesh Singh <kaleshsingh@google.com>
X-Patchwork-Id: 12924767
Date: Wed, 20 Jul 2022 22:57:28 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-18-kaleshsingh@google.com>
References: <20220721055728.718573-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog
Subject: [PATCH v5 17/17] KVM: arm64: Introduce hyp_dump_backtrace()
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
 yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

In non-protected nVHE mode, unwind and dump the hypervisor backtrace
from EL1. This is possible because the host can directly access the
hypervisor stack pages in non-protected mode.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
---

Changes in v5:
  - Move code out from nvhe.h header to handle_exit.c, per Marc
  - Fix stacktrace symbolization when CONFIG_RANDOMIZE_BASE is enabled,
    per Fuad
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/kvm/handle_exit.c | 65 +++++++++++++++++++++++++++++++-----
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index ad568da5c7d7..432b6b26f4ad 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -17,6 +17,7 @@
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
 #include <asm/debug-monitors.h>
+#include <asm/stacktrace/nvhe.h>
 #include <asm/traps.h>
 
 #include <kvm/arm_hypercalls.h>
@@ -318,6 +319,56 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
 }
 
+/*
+ * kvm_nvhe_print_backtrace_entry - Symbolizes and prints the HYP stack address
+ */
+static void kvm_nvhe_print_backtrace_entry(unsigned long addr,
+					   unsigned long hyp_offset)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	/* Mask tags and convert to kern addr */
+	addr = (addr & va_mask) + hyp_offset;
+	kvm_err(" [<%016lx>] %pB\n", addr, (void *)(addr + kaslr_offset()));
+}
+
+/*
+ * hyp_dump_backtrace_entry - Dump an entry of the non-protected nVHE HYP stacktrace
+ *
+ * @arg   : the hypervisor offset, used for address translation
+ * @where : the program counter corresponding to the stack frame
+ */
+static bool hyp_dump_backtrace_entry(void *arg, unsigned long where)
+{
+	kvm_nvhe_print_backtrace_entry(where, (unsigned long)arg);
+
+	return true;
+}
+
+/*
+ * hyp_dump_backtrace - Dump the non-protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * The host can directly access HYP stack pages in non-protected
+ * mode, so the unwinding is done directly from EL1. This removes
+ * the need for shared buffers between host and hypervisor for
+ * the stacktrace.
+ */
+static void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	struct unwind_state state;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
+
+	kvm_err("Non-protected nVHE HYP call trace:\n");
+	unwind(&state, hyp_dump_backtrace_entry, (void *)hyp_offset);
+	kvm_err("---- End of Non-protected nVHE HYP call trace ----\n");
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
 			 pkvm_stacktrace);
@@ -336,18 +387,12 @@ static void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
 	unsigned long *stacktrace_entry =
 		(unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
-	unsigned long va_mask, pc;
-
-	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
 
 	kvm_err("Protected nVHE HYP call trace:\n");
 
-	/* The stack trace is terminated by a null entry */
-	for (; *stacktrace_entry; stacktrace_entry++) {
-		/* Mask tags and convert to kern addr */
-		pc = (*stacktrace_entry & va_mask) + hyp_offset;
-		kvm_err(" [<%016lx>] %pB\n", pc, (void *)(pc + kaslr_offset()));
-	}
+	/* The saved stacktrace is terminated by a null entry */
+	for (; *stacktrace_entry; stacktrace_entry++)
+		kvm_nvhe_print_backtrace_entry(*stacktrace_entry, hyp_offset);
 
 	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
 }
@@ -367,6 +412,8 @@ static void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_dump_backtrace(hyp_offset);
+	else
+		hyp_dump_backtrace(hyp_offset);
 }
 
 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
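
Editor's note: for readers following the address math in
kvm_nvhe_print_backtrace_entry(), below is a minimal standalone
userspace sketch (not part of the patch) of the translation it
performs: mask off the bits above the hypervisor's virtual address
range, relocate into the kernel's view with the HYP offset, and let
%pB apply the KASLR offset during symbolization. All numeric values
here are invented for illustration, and GENMASK_ULL is re-derived
locally since the kernel header is not available in userspace.

	#include <stdio.h>
	#include <stdint.h>

	/* Userspace stand-in for the kernel's GENMASK_ULL(h, l) */
	#define GENMASK_ULL(h, l) \
		((~0ULL << (l)) & (~0ULL >> (63 - (h))))

	int main(void)
	{
		/* Hypothetical example values, not the kernel's */
		uint64_t vabits_actual = 48;             /* VA bits in use */
		uint64_t hyp_offset    = 0x2000000000;   /* HYP->kern offset */
		uint64_t kaslr_offset  = 0x0a000000;     /* KASLR slide */
		uint64_t hyp_addr      = 0xff8000081234abcdULL; /* tagged HYP PC */

		/* Mask off the tag bits above the VA range ... */
		uint64_t va_mask = GENMASK_ULL(vabits_actual - 1, 0);
		/* ... and relocate into the kernel's address space view. */
		uint64_t kern_addr = (hyp_addr & va_mask) + hyp_offset;

		/* %pB in the kernel additionally folds in the KASLR offset. */
		printf(" [<%016llx>] (symbolize at %#llx)\n",
		       (unsigned long long)kern_addr,
		       (unsigned long long)(kern_addr + kaslr_offset));
		return 0;
	}

Running the sketch prints one entry in the same " [<addr>]" format as
the HYP call traces above, which is why pkvm_dump_backtrace() can
reuse the same helper after this patch.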