From patchwork Fri Jul 15 06:10:23 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12918776
Date: Thu, 14 Jul 2022 23:10:23 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-15-kaleshsingh@google.com>
References: <20220715061027.1612149-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog
Subject: [PATCH v4 14/18] KVM: arm64: Implement protected nVHE hyp stack
 unwinder
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com,
 kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com,
 russell.king@oracle.com, vincenzo.frascino@arm.com, mhiramat@kernel.org,
 ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com,
 elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org,
 oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 android-mm@google.com, kernel-team@android.com

Implements the common framework necessary for unwind() to work in the
protected nVHE context:
 - on_accessible_stack()
 - on_overflow_stack()
 - unwind_next()

Protected nVHE unwind() is used to unwind and save the hyp stack
addresses to the shared stacktrace buffer. The host reads the entries
in this buffer, symbolizes and dumps the stacktrace (later patch in
the series).
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/stacktrace/common.h |  2 ++
 arch/arm64/include/asm/stacktrace/nvhe.h   | 34 ++++++++++++++++++++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index b362086f4c70..cf442e67dccd 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -27,6 +27,7 @@ enum stack_type {
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+	STACK_TYPE_HYP,
 	__NR_STACK_TYPES
 };
 
@@ -171,6 +172,7 @@ static inline int unwind_next_common(struct unwind_state *state,
	 *
	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+	 * HYP -> OVERFLOW
	 *
	 * ... but the nesting itself is strict. Once we transition from one
	 * stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 1aadfd8d7ac9..c7c8ac889ec1 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -38,10 +38,19 @@ static __always_inline void kvm_nvhe_unwind_init(struct unwind_state *state,
 	state->pc = pc;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info);
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
				       unsigned long sp, unsigned long size,
				       struct stack_info *info)
 {
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;
+
+	if (on_hyp_stack(sp, size, info))
+		return true;
+
 	return false;
 }
 
@@ -59,12 +68,27 @@ extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
				     struct stack_info *info)
 {
-	return false;
+	unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+	unsigned long high = params->stack_hyp_va;
+	unsigned long low = high - PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, NULL);
 }
 NOKPROBE_SYMBOL(unwind_next);
 #else	/* !CONFIG_PROTECTED_NVHE_STACKTRACE */
@@ -74,6 +98,12 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 	return false;
 }
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	return false;
+}
+
 static int notrace unwind_next(struct unwind_state *state)
 {
 	return 0;