From patchwork Thu Oct 5 09:50:24 2023
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 13409956
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com,
 broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org,
 james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com,
 mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev,
 pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com,
 tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org
Subject: [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for
 ARM64_WORKAROUND_REPEAT_TLBI
Date: Thu, 5 Oct 2023 10:50:24 +0100
Message-Id: <20231005095025.1872048-39-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>
References: <20231005095025.1872048-1-mark.rutland@arm.com>

In arch_tlbbatch_should_defer() we use cpus_have_const_cap() to check
for ARM64_WORKAROUND_REPEAT_TLBI, but this is not necessary and
alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle
some race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where
different branches could be out-of-sync with one another (or w.r.t.
alternative sequences). Now that we use alternative branches instead
of static branches, these are all patched atomically w.r.t. one
another, and there are only a handful of cases that need special care
in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(),
and migrate callers over to alternative_has_cap_*(),
cpus_have_final_cap(), or cpus_have_cap() depending on their
requirements. This will remove redundant instructions and improve code
generation, and will make it easier to determine how each callsite
will behave before, during, and after alternative patching.

The cpus_have_const_cap() check in arch_tlbbatch_should_defer() is an
optimization to avoid some redundant work when the
ARM64_WORKAROUND_REPEAT_TLBI cpucap is detected and forces the
immediate use of TLBI + DSB ISH. In the window between detecting the
ARM64_WORKAROUND_REPEAT_TLBI cpucap and patching alternatives this is
not a big concern and there's no need to optimize this window at the
expense of subsequent usage at runtime.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to
test the system_cpucaps bitmap and should be better for all subsequent
calls at runtime. The ARM64_WORKAROUND_REPEAT_TLBI cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible, without requiring ifdeffery or IS_ENABLED() checks at
each usage.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
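(Not part of the patch: for reference, a paraphrased sketch of the
helper being moved away from -- the real definition lives in
<asm/cpufeature.h>, so treat this as an illustration rather than the
verbatim source:

	/*
	 * Paraphrased: bitmap test before cpucaps are finalized,
	 * a single alternative branch afterwards.
	 */
	static __always_inline bool cpus_have_const_cap(int num)
	{
		if (system_capabilities_finalized())
			return alternative_has_cap_unlikely(num);

		/* test_bit() on the system_cpucaps bitmap */
		return cpus_have_cap(num);
	}

Calling alternative_has_cap_unlikely() directly drops the
system_capabilities_finalized() check and the bitmap fallback, leaving
only the patched branch, which reads as "workaround not present" until
alternatives are applied.)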
 arch/arm64/include/asm/cpucaps.h  | 2 ++
 arch/arm64/include/asm/tlbflush.h | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 7ddb79c235c27..270680e2b5c4a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,6 +56,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
 	case ARM64_WORKAROUND_NVIDIA_CARMEL_CNP:
 		return IS_ENABLED(CONFIG_NVIDIA_CARMEL_CNP_ERRATUM);
+	case ARM64_WORKAROUND_REPEAT_TLBI:
+		return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
 	}

 	return true;

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 53ed194626e1a..7aa476a52180a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -284,16 +284,15 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,

 static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 {
-#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
 	/*
 	 * TLB flush deferral is not required on systems which are affected by
 	 * ARM64_WORKAROUND_REPEAT_TLBI, as __tlbi()/__tlbi_user() implementation
 	 * will have two consecutive TLBI instructions with a dsb(ish) in between
 	 * defeating the purpose (i.e save overall 'dsb ish' cost).
 	 */
-	if (unlikely(cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_REPEAT_TLBI))
 		return false;
-#endif
+
 	return true;
 }
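
(Not part of the patch: to illustrate why the cpucap_is_possible()
entry replaces the deleted #ifdef, here is a paraphrased sketch of
alternative_has_cap_unlikely() -- the real definition lives in
<asm/alternative-macros.h>, so this is an illustration, not the
verbatim source:

	static __always_inline bool
	alternative_has_cap_unlikely(const unsigned long cpucap)
	{
		/*
		 * Compile-time: when the cpucap is configured out this is
		 * constant-false, so the caller's branch is elided entirely.
		 */
		if (!cpucap_is_possible(cpucap))
			return false;

		/*
		 * Runtime: a single branch, "not present" by default and
		 * patched once the cpucap has been detected.
		 */
		asm goto(
		ALTERNATIVE("nop", "b	%l[l_yes]", %[cpucap])
		:
		: [cpucap] "i" (cpucap)
		:
		: l_yes);

		return false;
	l_yes:
		return true;
	}

With the cpucap_is_possible() entry in place, a kernel built without
CONFIG_ARM64_WORKAROUND_REPEAT_TLBI compiles
arch_tlbbatch_should_defer() down to a plain 'return true', matching
what the old #ifdef achieved without ifdeffery at the call site.)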