From patchwork Thu May 5 16:32:06 2022
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 12839778
From: Sebastian Andrzej Siewior
To: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas, Will Deacon, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH 2/3] arm64/sve: Make kernel FPU protection RT friendly
Date: Thu, 5 May 2022 18:32:06 +0200
Message-Id: <20220505163207.85751-3-bigeasy@linutronix.de>
In-Reply-To: <20220505163207.85751-1-bigeasy@linutronix.de>
References: <20220505163207.85751-1-bigeasy@linutronix.de>
Non-RT kernels need to protect the FPU against preemption and bottom-half
processing. This is achieved by disabling bottom halves via
local_bh_disable(), which implicitly disables preemption.

On RT kernels this protection mechanism is not sufficient because
local_bh_disable() does not disable preemption; it only serializes
bottom-half related processing via a CPU-local lock.

As bottom halves always run in thread context on RT kernels, disabling
preemption is the proper choice, as it implicitly prevents bottom-half
processing as well.

Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Mark Brown
---
 arch/arm64/kernel/fpsimd.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 475939beb0167..ce4ee36b1da88 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -237,10 +237,19 @@ static void __get_cpu_fpsimd_context(void)
  *
  * The double-underscore version must only be called if you know the task
  * can't be preempted.
+ *
+ * On RT kernels local_bh_disable() is not sufficient because it only
+ * serializes soft interrupt related sections via a local lock, but stays
+ * preemptible. Disabling preemption is the right choice here as bottom
+ * half processing is always in thread context on RT kernels so it
+ * implicitly prevents bottom half processing as well.
  */
 static void get_cpu_fpsimd_context(void)
 {
-	local_bh_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
 	__get_cpu_fpsimd_context();
 }
 
@@ -261,7 +270,10 @@ static void __put_cpu_fpsimd_context(void)
 static void put_cpu_fpsimd_context(void)
 {
 	__put_cpu_fpsimd_context();
-	local_bh_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
 }
 
 static bool have_cpu_fpsimd_context(void)
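
[Editor's illustration, not part of the patch: a minimal sketch of the
CONFIG_PREEMPT_RT-conditional protection pattern the patch applies in
get_cpu_fpsimd_context()/put_cpu_fpsimd_context(). The helper names
fpsimd_section_enter()/fpsimd_section_exit() are hypothetical and exist
only for this example.]

#include <linux/bottom_half.h>	/* local_bh_disable()/local_bh_enable() */
#include <linux/kconfig.h>	/* IS_ENABLED() */
#include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */

/* Enter a section that uses the CPU's FP/SIMD registers in kernel mode. */
static void fpsimd_section_enter(void)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
		/*
		 * !RT: softirqs may also use the FPU, so bottom halves must
		 * be kept out; local_bh_disable() also disables preemption.
		 */
		local_bh_disable();
	} else {
		/*
		 * RT: local_bh_disable() only takes a per-CPU local lock and
		 * leaves the task preemptible. Bottom halves run in thread
		 * context, so disabling preemption keeps them out as well.
		 */
		preempt_disable();
	}
}

/* Leave the section, releasing whatever the enter path acquired. */
static void fpsimd_section_exit(void)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		local_bh_enable();
	else
		preempt_enable();
}

A caller would bracket its kernel-mode FP/SIMD usage with this enter/exit
pair, which is exactly what the patched get_cpu_fpsimd_context() and
put_cpu_fpsimd_context() do for their users in fpsimd.c.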