From patchwork Thu Jul 21 05:57:25 2022
X-Patchwork-Submitter: Kalesh Singh <kaleshsingh@google.com>
X-Patchwork-Id: 12924752
Date: Wed, 20 Jul 2022 22:57:25 -0700
In-Reply-To: <20220721055728.718573-1-kaleshsingh@google.com>
Message-Id: <20220721055728.718573-15-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220721055728.718573-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog
Subject: [PATCH v5 14/17] KVM: arm64: Implement
 protected nVHE hyp stack unwinder
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, drjones@redhat.com,
 wangkefeng.wang@huawei.com, elver@google.com, keirf@google.com,
 yuzenghui@huawei.com, ardb@kernel.org, oupton@google.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, android-mm@google.com,
 kernel-team@android.com

Implement the common framework necessary for unwind() to work in the
protected nVHE context:
   - on_accessible_stack()
   - on_overflow_stack()
   - unwind_next()

Protected nVHE unwind() is used to unwind and save the hyp stack
addresses to the shared stacktrace buffer. The host reads the entries
in this buffer, symbolizes and dumps the stacktrace (later patch in
the series).
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/stacktrace/common.h |  2 ++
 arch/arm64/include/asm/stacktrace/nvhe.h   | 34 ++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index be7920ba70b0..73fd9e143c4a 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -34,6 +34,7 @@ enum stack_type {
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+	STACK_TYPE_HYP,
 	__NR_STACK_TYPES
 };
 
@@ -186,6 +187,7 @@ static inline int unwind_next_common(struct unwind_state *state,
	 *
	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+	 * HYP -> OVERFLOW
	 *
	 * ... but the nesting itself is strict. Once we transition from one
	 * stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 8f02803a005f..c3688e717136 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -39,10 +39,19 @@ static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
 	state->pc = pc;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info);
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
				       unsigned long sp, unsigned long size,
				       struct stack_info *info)
 {
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;
+
+	if (on_hyp_stack(sp, size, info))
+		return true;
+
 	return false;
 }
 
@@ -60,12 +69,27 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
				     struct stack_info *info)
 {
-	return false;
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static inline int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, NULL);
 }
 NOKPROBE_SYMBOL(unwind_next);
 #else	/* !CONFIG_PROTECTED_NVHE_STACKTRACE */
@@ -75,6 +99,12 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 	return false;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	return false;
+}
+
 static inline int notrace unwind_next(struct unwind_state *state)
 {
 	return 0;