From patchwork Thu Jun 15 10:16:54 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13281025
From: Sebastian Andrzej Siewior
To: linux-arm-kernel@lists.infradead.org
Cc: Russell King, Ard Biesheuvel, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH 1/3] ARM: vfp: Provide vfp_lock() for VFP locking.
Date: Thu, 15 Jun 2023 12:16:54 +0200
Message-Id: <20230615101656.1308942-2-bigeasy@linutronix.de>
In-Reply-To: <20230615101656.1308942-1-bigeasy@linutronix.de>
References: <20230615101656.1308942-1-bigeasy@linutronix.de>

kernel_neon_begin() uses local_bh_disable() to ensure exclusive access
to the VFP unit. This is broken on PREEMPT_RT because a BH-disabled
section remains preemptible on PREEMPT_RT.

Introduce vfp_lock(), which uses local_bh_disable() on non-RT kernels
and preempt_disable() on PREEMPT_RT. Since softirqs on PREEMPT_RT are
always processed in thread context, disabling preemption is enough to
ensure that the current context is not interrupted by something that is
using the VFP. Use it in kernel_neon_begin().

Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/vfp/vfpmodule.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 58a9442add24b..0a21e13095809 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -54,6 +54,34 @@ extern unsigned int VFP_arch_feroceon __alias(VFP_arch);
  */
 union vfp_state *vfp_current_hw_state[NR_CPUS];
 
+/*
+ * Claim ownership of the VFP unit.
+ *
+ * The caller may change VFP registers until vfp_unlock() is called.
+ *
+ * local_bh_disable() is used to disable preemption and to disable VFP
+ * processing in softirq context. On PREEMPT_RT kernels local_bh_disable() is
+ * not sufficient because it only serializes soft interrupt related sections
+ * via a local lock, but stays preemptible. Disabling preemption is the right
+ * choice here as bottom half processing is always in thread context on RT
+ * kernels, so it implicitly prevents bottom half processing as well.
+ */
+static void vfp_lock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
+}
+
+static void vfp_unlock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
+}
+
 /*
  * Is 'thread's most up to date state stored in this CPUs hardware?
  * Must be called from non-preemptible context.
@@ -818,7 +846,7 @@ void kernel_neon_begin(void)
 	unsigned int cpu;
 	u32 fpexc;
 
-	local_bh_disable();
+	vfp_lock();
 
 	/*
 	 * Kernel mode NEON is only allowed outside of hardirq context with
@@ -849,7 +877,7 @@ void kernel_neon_end(void)
 {
 	/* Disable the NEON/VFP unit. */
 	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
-	local_bh_enable();
+	vfp_unlock();
 }
 EXPORT_SYMBOL(kernel_neon_end);