From patchwork Thu Jun 15 10:16:54 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13281025
From: Sebastian Andrzej Siewior
To: linux-arm-kernel@lists.infradead.org
Cc: Russell King, Ard Biesheuvel, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH 1/3] ARM: vfp: Provide vfp_lock() for VFP locking.
Date: Thu, 15 Jun 2023 12:16:54 +0200
Message-Id: <20230615101656.1308942-2-bigeasy@linutronix.de>
In-Reply-To: <20230615101656.1308942-1-bigeasy@linutronix.de>
References: <20230615101656.1308942-1-bigeasy@linutronix.de>

kernel_neon_begin() uses local_bh_disable() to ensure exclusive access
to the VFP unit. This is broken on PREEMPT_RT because a BH-disabled
section remains preemptible on PREEMPT_RT.

Introduce vfp_lock(), which uses local_bh_disable() on non-RT kernels
and preempt_disable() on PREEMPT_RT. Since softirqs on PREEMPT_RT are
always processed in thread context, disabling preemption is enough to
ensure that the current context is not interrupted by something that is
using the VFP.

Use it in kernel_neon_begin().

Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 58a9442add24b..0a21e13095809 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -54,6 +54,34 @@ extern unsigned int VFP_arch_feroceon __alias(VFP_arch);
  */
 union vfp_state *vfp_current_hw_state[NR_CPUS];
 
+/*
+ * Claim ownership of the VFP unit.
+ *
+ * The caller may change VFP registers until vfp_unlock() is called.
+ *
+ * local_bh_disable() is used to disable preemption and to disable VFP
+ * processing in softirq context. On PREEMPT_RT kernels local_bh_disable() is
+ * not sufficient because it only serializes soft interrupt related sections
+ * via a local lock, but stays preemptible. Disabling preemption is the right
+ * choice here as bottom half processing is always in thread context on RT
+ * kernels so it implicitly prevents bottom half processing as well.
+ */
+static void vfp_lock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
+}
+
+static void vfp_unlock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
+}
+
 /*
  * Is 'thread's most up to date state stored in this CPUs hardware?
  * Must be called from non-preemptible context.
@@ -818,7 +846,7 @@ void kernel_neon_begin(void)
 	unsigned int cpu;
 	u32 fpexc;
 
-	local_bh_disable();
+	vfp_lock();
 
 	/*
 	 * Kernel mode NEON is only allowed outside of hardirq context with
@@ -849,7 +877,7 @@ void kernel_neon_end(void)
 {
 	/* Disable the NEON/VFP unit. */
 	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
-	local_bh_enable();
+	vfp_unlock();
 }
 EXPORT_SYMBOL(kernel_neon_end);

From patchwork Thu Jun 15 10:16:55 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13281024
From: Sebastian Andrzej Siewior
To: linux-arm-kernel@lists.infradead.org
Cc: Russell King, Ard Biesheuvel, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH 2/3] ARM: vfp: Use vfp_lock() in vfp_sync_hwstate().
Date: Thu, 15 Jun 2023 12:16:55 +0200
Message-Id: <20230615101656.1308942-3-bigeasy@linutronix.de>
In-Reply-To: <20230615101656.1308942-1-bigeasy@linutronix.de>
References: <20230615101656.1308942-1-bigeasy@linutronix.de>

vfp_sync_hwstate() uses preempt_disable() followed by local_bh_disable()
to ensure that it won't get interrupted while checking the VFP state.
This harms PREEMPT_RT because softirq handling can get preempted and
local_bh_disable() synchronizes the related section with a sleeping
lock, which does not work with disabled preemption.

Use vfp_lock() to synchronize the access.
Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 0a21e13095809..524aec81134ba 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -539,11 +539,9 @@ static inline void vfp_pm_init(void) { }
  */
 void vfp_sync_hwstate(struct thread_info *thread)
 {
-	unsigned int cpu = get_cpu();
+	vfp_lock();
 
-	local_bh_disable();
-
-	if (vfp_state_in_hw(cpu, thread)) {
+	if (vfp_state_in_hw(raw_smp_processor_id(), thread)) {
 		u32 fpexc = fmrx(FPEXC);
 
 		/*
@@ -554,8 +552,7 @@ void vfp_sync_hwstate(struct thread_info *thread)
 		fmxr(FPEXC, fpexc);
 	}
 
-	local_bh_enable();
-	put_cpu();
+	vfp_unlock();
 }
 
 /* Ensure that the thread reloads the hardware VFP state on the next use. */

From patchwork Thu Jun 15 10:16:56 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13281023
From: Sebastian Andrzej Siewior
To: linux-arm-kernel@lists.infradead.org
Cc: Russell King, Ard Biesheuvel, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH 3/3] ARM: vfp: Use vfp_lock() in vfp_entry().
Date: Thu, 15 Jun 2023 12:16:56 +0200
Message-Id: <20230615101656.1308942-4-bigeasy@linutronix.de>
In-Reply-To: <20230615101656.1308942-1-bigeasy@linutronix.de>
References: <20230615101656.1308942-1-bigeasy@linutronix.de>

vfp_entry() is invoked from the exception handler and is fully
preemptible. It uses local_bh_disable() to remain uninterrupted while
checking the VFP state. This does not work on PREEMPT_RT because
local_bh_disable() synchronizes the relevant section but the context
remains fully preemptible.

Use vfp_lock() for uninterrupted access.

VFP_bounce() is invoked from within vfp_entry() and may send a signal.
Sending a signal uses spinlock_t, which becomes a sleeping lock on
PREEMPT_RT and must not be acquired within a preempt-disabled section.
Move the vfp_raise_sigfpe() invocation outside of the preempt-disabled
section.

Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 524aec81134ba..67d7042bc3d5c 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -267,7 +267,7 @@ static void vfp_panic(char *reason, u32 inst)
 /*
  * Process bitmask of exception conditions.
  */
-static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_regs *regs)
+static int vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr)
 {
 	int si_code = 0;
 
@@ -275,8 +275,7 @@ static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_
 	if (exceptions == VFP_EXCEPTION_ERROR) {
 		vfp_panic("unhandled bounce", inst);
-		vfp_raise_sigfpe(FPE_FLTINV, regs);
-		return;
+		return FPE_FLTINV;
 	}
 
 	/*
@@ -304,8 +303,7 @@ static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_
 	RAISE(FPSCR_OFC, FPSCR_OFE, FPE_FLTOVF);
 	RAISE(FPSCR_IOC, FPSCR_IOE, FPE_FLTINV);
 
-	if (si_code)
-		vfp_raise_sigfpe(si_code, regs);
+	return si_code;
 }
 
 /*
@@ -351,6 +349,8 @@ static u32 vfp_emulate_instruction(u32 inst, u32 fpscr, struct pt_regs *regs)
 static void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 {
 	u32 fpscr, orig_fpscr, fpsid, exceptions;
+	int si_code2 = 0;
+	int si_code = 0;
 
 	pr_debug("VFP: bounce: trigger %08x fpexc %08x\n", trigger, fpexc);
 
@@ -396,8 +396,8 @@ static void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 		 * unallocated VFP instruction but with FPSCR.IXE set and not
 		 * on VFP subarch 1.
 		 */
-		vfp_raise_exceptions(VFP_EXCEPTION_ERROR, trigger, fpscr, regs);
-		return;
+		si_code = vfp_raise_exceptions(VFP_EXCEPTION_ERROR, trigger, fpscr);
+		goto exit;
 	}
 
 	/*
@@ -421,14 +421,14 @@ static void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 	 */
 	exceptions = vfp_emulate_instruction(trigger, fpscr, regs);
 	if (exceptions)
-		vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
+		si_code2 = vfp_raise_exceptions(exceptions, trigger, orig_fpscr);
 
 	/*
 	 * If there isn't a second FP instruction, exit now. Note that
 	 * the FPEXC.FP2V bit is valid only if FPEXC.EX is 1.
 	 */
 	if ((fpexc & (FPEXC_EX | FPEXC_FP2V)) != (FPEXC_EX | FPEXC_FP2V))
-		return;
+		goto exit;
 
 	/*
 	 * The barrier() here prevents fpinst2 being read
@@ -440,7 +440,13 @@ static void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 emulate:
 	exceptions = vfp_emulate_instruction(trigger, orig_fpscr, regs);
 	if (exceptions)
-		vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
+		si_code = vfp_raise_exceptions(exceptions, trigger, orig_fpscr);
+exit:
+	vfp_unlock();
+	if (si_code2)
+		vfp_raise_sigfpe(si_code2, regs);
+	if (si_code)
+		vfp_raise_sigfpe(si_code, regs);
 }
 
 static void vfp_enable(void *unused)
@@ -707,7 +713,7 @@ static int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 	if (!user_mode(regs))
 		return vfp_kmode_exception(regs, trigger);
 
-	local_bh_disable();
+	vfp_lock();
 	fpexc = fmrx(FPEXC);
 
 	/*
@@ -772,6 +778,7 @@ static int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 		 * replay the instruction that trapped.
 		 */
 		fmxr(FPEXC, fpexc);
+		vfp_unlock();
 	} else {
 		/* Check for synchronous or asynchronous exceptions */
 		if (!(fpexc & (FPEXC_EX | FPEXC_DEX))) {
@@ -786,17 +793,17 @@ static int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 			if (!(fpscr & FPSCR_IXE)) {
 				if (!(fpscr & FPSCR_LENGTH_MASK)) {
 					pr_debug("not VFP\n");
-					local_bh_enable();
+					vfp_unlock();
 					return -ENOEXEC;
 				}
 				fpexc |= FPEXC_DEX;
 			}
 		}
 bounce:		regs->ARM_pc += 4;
+		/* VFP_bounce() will invoke vfp_unlock() */
 		VFP_bounce(trigger, fpexc, regs);
 	}
-	local_bh_enable();
 	return 0;
 }