From patchwork Mon Jan 5 15:12:38 2015
From: Daniel Thompson
To: Russell King
Cc: Daniel Thompson, linaro-kernel@lists.linaro.org, patches@linaro.org,
    linux-kernel@vger.kernel.org, John Stultz, Sumit Semwal,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH] arm: Remove early stack deallocation from restore_user_regs
Date: Mon, 5 Jan 2015 15:12:38 +0000
Message-Id: <1420470758-5874-1-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1418382718-16323-1-git-send-email-daniel.thompson@linaro.org>
References: <1418382718-16323-1-git-send-email-daniel.thompson@linaro.org>

Currently restore_user_regs deallocates the SVC stack early in its
execution and relies on no
exception being taken between the deallocation and the registers
being restored. The introduction of a default FIQ handler that also
uses the SVC stack breaks this assumption and can result in corrupted
register state.

This patch works around the problem by removing the early stack
deallocation and using r2 as a temporary instead. I have not found a
way to do this without introducing an extra mov instruction to the
macro.

Signed-off-by: Daniel Thompson
---

Notes:
    [This patch has not been modified since its original posting as an
    RFC.]

    I have recently started to hook up the PMU via FIQ (although it is
    slightly hacky at present) and was seeing random userspace SEGVs
    when perf was running (after ~100,000 or so FIQs). Instrumenting
    the code eventually revealed that in almost all cases the last FIQ
    handler to run prior to the SEGV had interrupted
    ret_to_user_from_irq or ret_fast_syscall. Very occasionally it was
    in the fault handling code (because that code runs as part of SEGV
    handling and the PMU is instrumenting that too).

    No SEGV problems have been observed since fixing the issue. This
    version of the patch has seen >7M FIQs, and an older version (based
    on cpsid f) ran overnight.

 arch/arm/kernel/entry-header.S | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

--
1.9.3

diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index 4176df721bf0..1a0045abead7 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -253,21 +253,22 @@
 	.endm

 	.macro	restore_user_regs, fast = 0, offset = 0
-	ldr	r1, [sp, #\offset + S_PSR]	@ get calling cpsr
-	ldr	lr, [sp, #\offset + S_PC]!	@ get pc
+	mov	r2, sp
+	ldr	r1, [r2, #\offset + S_PSR]	@ get calling cpsr
+	ldr	lr, [r2, #\offset + S_PC]!	@ get pc
 	msr	spsr_cxsf, r1			@ save in spsr_svc
 #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
 	@ We must avoid clrex due to Cortex-A15 erratum #830321
-	strex	r1, r2, [sp]			@ clear the exclusive monitor
+	strex	r1, r2, [r2]			@ clear the exclusive monitor
#endif
 	.if	\fast
-	ldmdb	sp, {r1 - lr}^			@ get calling r1 - lr
+	ldmdb	r2, {r1 - lr}^			@ get calling r1 - lr
 	.else
-	ldmdb	sp, {r0 - lr}^			@ get calling r0 - lr
+	ldmdb	r2, {r0 - lr}^			@ get calling r0 - lr
 	.endif
 	mov	r0, r0				@ ARMv5T and earlier require a nop
 						@ after ldm {}^
-	add	sp, sp, #S_FRAME_SIZE - S_PC
+	add	sp, sp, #\offset + S_FRAME_SIZE
 	movs	pc, lr				@ return & move spsr_svc into cpsr
 	.endm
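
For readers unfamiliar with the failure mode, the window can be modeled
with a toy simulation (not kernel code; the stack, register values, and
the fiq() function below are all hypothetical stand-ins). A frame of
saved user registers sits on a downward-growing SVC stack; if sp is
raised past the frame before the loads, a FIQ arriving in that window
pushes its own data on top of the not-yet-restored registers:

```python
# Toy model of the hazard (illustrative only; the real code is ARM
# assembly and "fiq" here is just a function that scribbles below sp).
STACK_SIZE = 32
stack = [0] * STACK_SIZE           # SVC stack, grows downward

def save_frame(sp, regs):
    """Push a saved-register frame (like pt_regs) onto the stack."""
    sp -= len(regs)
    stack[sp:sp + len(regs)] = regs
    return sp

def fiq(sp):
    """A FIQ handler that borrows the SVC stack below the current sp."""
    sp -= 4
    stack[sp:sp + 4] = [0xF1] * 4  # handler scratch; writes persist

regs = list(range(16))             # values of r0-r15 to be restored

# Old sequence: deallocate the frame early, then load from below sp.
sp = save_frame(STACK_SIZE, regs)
frame = sp
sp += len(regs)                    # early deallocation
fiq(sp)                            # FIQ fires in the window: clobbers frame
restored_buggy = stack[frame:frame + len(regs)]

# Patched sequence: sp keeps covering the frame; r2 is the temporary.
sp = save_frame(STACK_SIZE, regs)
r2 = sp                            # mov r2, sp
fiq(sp)                            # FIQ pushes below the still-live frame
restored_fixed = stack[r2:r2 + len(regs)]
sp += len(regs)                    # deallocate only after the loads

print(restored_buggy == regs)      # False: top of the frame overwritten
print(restored_fixed == regs)      # True
```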