From patchwork Sun Aug 26 22:46:56 2012
X-Patchwork-Submitter: Colin Cross
X-Patchwork-Id: 1375991
From: Colin Cross
To: linux-arm-kernel@lists.infradead.org
Cc: Rabin Vincent, Catalin Marinas, Russell King, Will Deacon, Colin Cross
Subject: [PATCH 2/2] ARM: unwind: enable dumping stacks for SMP && ARM_UNWIND
Date: Sun, 26 Aug 2012 15:46:56 -0700
Message-Id: <1346021216-21979-3-git-send-email-ccross@android.com>
In-Reply-To: <1346021216-21979-1-git-send-email-ccross@android.com>
References: <1346021216-21979-1-git-send-email-ccross@android.com>

Unwinding with CONFIG_ARM_UNWIND is much more complicated than unwinding
with CONFIG_FRAME_POINTER, but there are only a few points that require
validation in order to avoid faults or infinite loops.  Avoiding faults
is easy: add checks to verify that all accesses relative to the frame's
stack pointer remain inside the stack.  When CONFIG_FRAME_POINTER is not
set it is possible for two frames to have the same SP, so there is no
way to keep repeated calls to unwind_frame from continuing forever.

Signed-off-by: Colin Cross
---
 arch/arm/kernel/stacktrace.c |   12 ------------
 arch/arm/kernel/unwind.c     |   31 +++++++++++++++++++++++++++----
 2 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
index 45e6b7e..f51dd68 100644
--- a/arch/arm/kernel/stacktrace.c
+++ b/arch/arm/kernel/stacktrace.c
@@ -105,23 +105,11 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	data.skip = trace->skip;
 
 	if (tsk != current) {
-#if defined(CONFIG_SMP) || \
-	(defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND))
-		/*
-		 * What guarantees do we have here that 'tsk' is not
-		 * running on another CPU? For now, ignore it as we
-		 * can't guarantee we won't explode.
-		 */
-		if (trace->nr_entries < trace->max_entries)
-			trace->entries[trace->nr_entries++] = ULONG_MAX;
-		return;
-#else
 		data.no_sched_functions = 1;
 		frame.fp = thread_saved_fp(tsk);
 		frame.sp = thread_saved_sp(tsk);
 		frame.lr = 0;		/* recovered from the stack */
 		frame.pc = thread_saved_pc(tsk);
-#endif
 	} else {
 		register unsigned long current_sp asm ("sp");
 
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 00df012..b3a09ad 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -98,6 +98,16 @@ enum regs {
 		(unsigned long)(ptr) + offset;	\
 	})
 
+static bool valid_stack_addr(unsigned long sp, unsigned long *vsp)
+{
+	unsigned long low;
+	unsigned long high;
+
+	low = round_down(sp, THREAD_SIZE) + sizeof(struct thread_info);
+	high = ALIGN(sp, THREAD_SIZE);
+	return ((unsigned long)vsp >= low && (unsigned long)vsp < high);
+}
+
 /*
  * Binary search in the unwind index. The entries are
  * guaranteed to be sorted in ascending order by the linker.
@@ -241,6 +251,7 @@ static unsigned long unwind_get_byte(struct unwind_ctrl_block *ctrl)
 static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
 {
 	unsigned long insn = unwind_get_byte(ctrl);
+	unsigned long orig_sp = ctrl->vrs[SP];
 
 	pr_debug("%s: insn = %08lx\n", __func__, insn);
 
@@ -264,8 +275,11 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
 		/* pop R4-R15 according to mask */
 		load_sp = mask & (1 << (13 - 4));
 		while (mask) {
-			if (mask & 1)
+			if (mask & 1) {
+				if (!valid_stack_addr(orig_sp, vsp))
+					return -URC_FAILURE;
 				ctrl->vrs[reg] = *vsp++;
+			}
 			mask >>= 1;
 			reg++;
 		}
@@ -279,10 +293,16 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
 		int reg;
 
 		/* pop R4-R[4+bbb] */
-		for (reg = 4; reg <= 4 + (insn & 7); reg++)
+		for (reg = 4; reg <= 4 + (insn & 7); reg++) {
+			if (!valid_stack_addr(orig_sp, vsp))
+				return -URC_FAILURE;
 			ctrl->vrs[reg] = *vsp++;
-		if (insn & 0x80)
+		}
+		if (insn & 0x80) {
+			if (!valid_stack_addr(orig_sp, vsp))
+				return -URC_FAILURE;
 			ctrl->vrs[14] = *vsp++;
+		}
 		ctrl->vrs[SP] = (unsigned long)vsp;
 	} else if (insn == 0xb0) {
 		if (ctrl->vrs[PC] == 0)
@@ -302,8 +322,11 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
 
 		/* pop R0-R3 according to mask */
 		while (mask) {
-			if (mask & 1)
+			if (mask & 1) {
+				if (!valid_stack_addr(orig_sp, vsp))
+					return -URC_FAILURE;
 				ctrl->vrs[reg] = *vsp++;
+			}
 			mask >>= 1;
 			reg++;
 		}
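[Editor's note] For readers following the bounds check, here is a stand-alone
user-space sketch of the idea behind valid_stack_addr() and the bounds-checked
register pops.  It models the patch's logic, not the kernel code itself: the
8 KiB THREAD_SIZE and 64-byte thread_info size are illustrative assumptions,
and ROUND_DOWN/ALIGN_UP stand in for the kernel's round_down()/ALIGN() helpers.

```c
/* Illustrative model of the patch's stack-bounds validation.
 * Assumed values: 8 KiB kernel stack, 64-byte thread_info at its base. */
#define THREAD_SIZE      8192UL
#define THREAD_INFO_SIZE 64UL

/* Stand-ins for the kernel's round_down()/ALIGN(), power-of-two sizes only */
#define ROUND_DOWN(x, a) ((x) & ~((a) - 1))
#define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))

/* A virtual stack pointer is valid only if it stays inside the stack that
 * contains sp: above the thread_info at the stack's base and below the top. */
static int valid_stack_addr(unsigned long sp, const unsigned long *vsp)
{
	unsigned long low  = ROUND_DOWN(sp, THREAD_SIZE) + THREAD_INFO_SIZE;
	unsigned long high = ALIGN_UP(sp, THREAD_SIZE);

	return (unsigned long)vsp >= low && (unsigned long)vsp < high;
}

/* Model of the "pop R4-R15 according to mask" loop: every dereference of
 * vsp is bounds-checked first, so a corrupt unwind table cannot walk the
 * virtual stack pointer off the stack.  Returns 0 on success, -1 where the
 * kernel would return -URC_FAILURE. */
static int pop_regs(unsigned long sp, const unsigned long **pvsp,
		    unsigned long mask, unsigned long *vrs)
{
	const unsigned long *vsp = *pvsp;
	int reg = 4;

	while (mask) {
		if (mask & 1) {
			if (!valid_stack_addr(sp, vsp))
				return -1;	/* would fault: abort unwind */
			vrs[reg] = *vsp++;
		}
		mask >>= 1;
		reg++;
	}
	*pvsp = vsp;
	return 0;
}
```

The point of checking before every dereference (rather than once per opcode)
is that each pop advances vsp, so a large register mask can carry vsp past the
top of the stack partway through a single unwind instruction.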