From patchwork Wed Jun 7 16:48:43 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13270998 From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: broonie@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, suzuki.poulose@arm.com, will@kernel.org Subject: [PATCH v2 1/4] arm64: standardise cpucap bitmap names Date: Wed, 7 Jun 2023 17:48:43 +0100 Message-Id: <20230607164846.3967305-2-mark.rutland@arm.com> In-Reply-To: <20230607164846.3967305-1-mark.rutland@arm.com> References: <20230607164846.3967305-1-mark.rutland@arm.com> The 'cpu_hwcaps' and 'boot_capabilities' bitmaps have the same enumerated bits, but are named wildly differently for no good reason. The terms 'hwcaps' and 'capabilities' have become ambiguous over time (e.g.
due to clashes with ELF hwcaps and the structures used to manage feature detection), and it would be nicer to use 'cpucaps', matching the header the enumerated bit indices are defined in. While this isn't a functional problem, it makes the code harder than necessary to understand, and hard to extend with related functionality (e.g. per-cpu cpucap bitmaps). To that end, this patch renames `boot_capabilities` to `boot_cpucaps` and `cpu_hwcaps` to `system_cpucaps`. This more clearly indicates the relationship between the two and aligns with terminology used elsewhere in our feature management code. This change was scripted with: | find . -type f -name '*.[chS]' -print0 | \ | xargs -0 sed -i 's/\<boot_capabilities\>/boot_cpucaps/' | find . -type f -name '*.[chS]' -print0 | \ | xargs -0 sed -i 's/\<cpu_hwcaps\>/system_cpucaps/' ... and the instance of "cpu_hwcap" (without a trailing "s") in <asm/mmu_context.h> was corrected manually to "system_cpucaps". Subsequent patches will adjust the naming of related functions to better align with the `cpucap` naming. There should be no functional change as a result of this patch; this is purely a renaming exercise. Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Reviewed-by: Suzuki K Poulose Cc: Catalin Marinas Cc: Marc Zyngier Cc: Will Deacon --- arch/arm64/include/asm/cpufeature.h | 12 ++++++------ arch/arm64/include/asm/mmu_context.h | 2 +- arch/arm64/kernel/alternative.c | 6 +++--- arch/arm64/kernel/cpufeature.c | 12 ++++++------ 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6bf013fb110d7..04e11cdeda14b 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -107,7 +107,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0; * CPU capabilities: * * We use arm64_cpu_capabilities to represent system features, errata work - * arounds (both used internally by kernel and tracked in cpu_hwcaps) and + * arounds (both used internally by kernel and tracked in system_cpucaps) and * ELF HWCAPs (which are exposed to user). * * To support systems with heterogeneous CPUs, we need to make sure that we @@ -419,12 +419,12 @@ static __always_inline bool is_hyp_code(void) return is_vhe_hyp_code() || is_nvhe_hyp_code(); } -extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS); +extern DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS); -extern DECLARE_BITMAP(boot_capabilities, ARM64_NCAPS); +extern DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS); #define for_each_available_cap(cap) \ - for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS) + for_each_set_bit(cap, system_cpucaps, ARM64_NCAPS) bool this_cpu_has_cap(unsigned int cap); void cpu_set_feature(unsigned int num); @@ -449,7 +449,7 @@ static __always_inline bool cpus_have_cap(unsigned int num) { if (num >= ARM64_NCAPS) return false; - return arch_test_bit(num, cpu_hwcaps); + return arch_test_bit(num, system_cpucaps); } /* @@ -510,7 +510,7 @@ static inline void cpus_set_cap(unsigned int num) pr_warn("Attempt to set an illegal CPU capability (%d >= %d)\n", num, ARM64_NCAPS); } else { - __set_bit(num, cpu_hwcaps); + __set_bit(num, system_cpucaps); } } diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 56911691bef05..5e0402946c35b 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -164,7 +164,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap) * up (i.e.
cpufeature framework is not up yet) and * latter only when we enable CNP via cpufeature's * enable() callback. - * Also we rely on the cpu_hwcap bit being set before + * Also we rely on the system_cpucaps bit being set before * calling the enable() function. */ ttbr1 |= TTBR_CNP_BIT; diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c index d32d4ed5519bd..51a5dbd7f324e 100644 --- a/arch/arm64/kernel/alternative.c +++ b/arch/arm64/kernel/alternative.c @@ -192,7 +192,7 @@ static void __apply_alternatives(const struct alt_region *region, bitmap_or(applied_alternatives, applied_alternatives, feature_mask, ARM64_NCAPS); bitmap_and(applied_alternatives, applied_alternatives, - cpu_hwcaps, ARM64_NCAPS); + system_cpucaps, ARM64_NCAPS); } } @@ -239,7 +239,7 @@ static int __init __apply_alternatives_multi_stop(void *unused) } else { DECLARE_BITMAP(remaining_capabilities, ARM64_NCAPS); - bitmap_complement(remaining_capabilities, boot_capabilities, + bitmap_complement(remaining_capabilities, boot_cpucaps, ARM64_NCAPS); BUG_ON(all_alternatives_applied); @@ -274,7 +274,7 @@ void __init apply_boot_alternatives(void) pr_info("applying boot alternatives\n"); __apply_alternatives(&kernel_alternatives, false, - &boot_capabilities[0]); + &boot_cpucaps[0]); } #ifdef CONFIG_MODULES diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 7d7128c651614..68e012f4ea080 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -105,11 +105,11 @@ unsigned int compat_elf_hwcap __read_mostly = COMPAT_ELF_HWCAP_DEFAULT; unsigned int compat_elf_hwcap2 __read_mostly; #endif -DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS); -EXPORT_SYMBOL(cpu_hwcaps); +DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS); +EXPORT_SYMBOL(system_cpucaps); static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM64_NCAPS]; -DECLARE_BITMAP(boot_capabilities, ARM64_NCAPS); +DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS); bool arm64_use_ng_mappings = false; EXPORT_SYMBOL(arm64_use_ng_mappings); @@ -137,7 +137,7 @@ static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly; void dump_cpu_features(void) { /* file-wide pr_fmt adds "CPU features: " prefix */ - pr_emerg("0x%*pb\n", ARM64_NCAPS, &cpu_hwcaps); + pr_emerg("0x%*pb\n", ARM64_NCAPS, &system_cpucaps); } #define ARM64_CPUID_FIELDS(reg, field, min_value) \ @@ -2906,7 +2906,7 @@ static void update_cpu_capabilities(u16 scope_mask) cpus_set_cap(caps->capability); if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU)) - set_bit(caps->capability, boot_capabilities); + set_bit(caps->capability, boot_cpucaps); } } @@ -3207,7 +3207,7 @@ EXPORT_SYMBOL_GPL(this_cpu_has_cap); /* * This helper function is used in a narrow window when, * - The system wide safe registers are set with all the SMP CPUs and, - * - The SYSTEM_FEATURE cpu_hwcaps may not have been set. + * - The SYSTEM_FEATURE system_cpucaps may not have been set. * In all other cases cpus_have_{const_}cap() should be used. 
*/ static bool __maybe_unused __system_matches_cap(unsigned int n) From patchwork Wed Jun 7 16:48:44 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13270999 From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: broonie@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, suzuki.poulose@arm.com, will@kernel.org Subject: [PATCH v2 2/4] arm64: alternatives: use cpucap naming Date: Wed, 7 Jun 2023 17:48:44 +0100 Message-Id: <20230607164846.3967305-3-mark.rutland@arm.com> In-Reply-To: <20230607164846.3967305-1-mark.rutland@arm.com> References: <20230607164846.3967305-1-mark.rutland@arm.com> To more clearly align the various users of the cpucap enumeration, this patch changes the alternative code to use the term `cpucap` in favour of `feature`.
The alternative_has_feature_{likely,unlikely}() functions are renamed to alternative_has_cap_{likely,unlikely}(), and the `cpufeature` field in struct alt_instr is renamed to `cpucap`. There should be no functional change as a result of this patch; this is purely a renaming exercise. Signed-off-by: Mark Rutland Reviewed-by: Suzuki K Poulose Cc: Catalin Marinas Cc: Marc Zyngier Cc: Mark Brown Cc: Will Deacon --- arch/arm64/include/asm/alternative-macros.h | 54 ++++++++++----------- arch/arm64/include/asm/alternative.h | 4 +- arch/arm64/include/asm/cpufeature.h | 4 +- arch/arm64/include/asm/irqflags.h | 2 +- arch/arm64/include/asm/lse.h | 2 +- arch/arm64/kernel/alternative.c | 17 +++---- 6 files changed, 41 insertions(+), 42 deletions(-) diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h index bdf1f6bcd0103..94b486192e1f1 100644 --- a/arch/arm64/include/asm/alternative-macros.h +++ b/arch/arm64/include/asm/alternative-macros.h @@ -23,17 +23,17 @@ #include -#define ALTINSTR_ENTRY(feature) \ +#define ALTINSTR_ENTRY(cpucap) \ " .word 661b - .\n" /* label */ \ " .word 663f - .\n" /* new instruction */ \ - " .hword " __stringify(feature) "\n" /* feature bit */ \ + " .hword " __stringify(cpucap) "\n" /* cpucap */ \ " .byte 662b-661b\n" /* source len */ \ " .byte 664f-663f\n" /* replacement len */ -#define ALTINSTR_ENTRY_CB(feature, cb) \ +#define ALTINSTR_ENTRY_CB(cpucap, cb) \ " .word 661b - .\n" /* label */ \ - " .word " __stringify(cb) "- .\n" /* callback */ \ - " .hword " __stringify(feature) "\n" /* feature bit */ \ + " .word " __stringify(cb) "- .\n" /* callback */ \ + " .hword " __stringify(cpucap) "\n" /* cpucap */ \ " .byte 662b-661b\n" /* source len */ \ " .byte 664f-663f\n" /* replacement len */ @@ -53,13 +53,13 @@ * * Alternatives with callbacks do not generate replacement instructions. */ -#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled) \ +#define __ALTERNATIVE_CFG(oldinstr, newinstr, cpucap, cfg_enabled) \ ".if "__stringify(cfg_enabled)" == 1\n" \ "661:\n\t" \ oldinstr "\n" \ "662:\n" \ ".pushsection .altinstructions,\"a\"\n" \ - ALTINSTR_ENTRY(feature) \ + ALTINSTR_ENTRY(cpucap) \ ".popsection\n" \ ".subsection 1\n" \ "663:\n\t" \ @@ -70,31 +70,31 @@ ".previous\n" \ ".endif\n" -#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb) \ +#define __ALTERNATIVE_CFG_CB(oldinstr, cpucap, cfg_enabled, cb) \ ".if "__stringify(cfg_enabled)" == 1\n" \ "661:\n\t" \ oldinstr "\n" \ "662:\n" \ ".pushsection .altinstructions,\"a\"\n" \ - ALTINSTR_ENTRY_CB(feature, cb) \ + ALTINSTR_ENTRY_CB(cpucap, cb) \ ".popsection\n" \ "663:\n\t" \ "664:\n\t" \ ".endif\n" -#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \ - __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg)) +#define _ALTERNATIVE_CFG(oldinstr, newinstr, cpucap, cfg, ...) \ + __ALTERNATIVE_CFG(oldinstr, newinstr, cpucap, IS_ENABLED(cfg)) -#define ALTERNATIVE_CB(oldinstr, feature, cb) \ - __ALTERNATIVE_CFG_CB(oldinstr, (1 << ARM64_CB_SHIFT) | (feature), 1, cb) +#define ALTERNATIVE_CB(oldinstr, cpucap, cb) \ + __ALTERNATIVE_CFG_CB(oldinstr, (1 << ARM64_CB_SHIFT) | (cpucap), 1, cb) #else #include -.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len +.macro altinstruction_entry orig_offset alt_offset cpucap orig_len alt_len .word \orig_offset - . .word \alt_offset - .
- .hword (\feature) + .hword (\cpucap) .byte \orig_len .byte \alt_len .endm @@ -210,9 +210,9 @@ alternative_endif #endif /* __ASSEMBLY__ */ /* - * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature)); + * Usage: asm(ALTERNATIVE(oldinstr, newinstr, cpucap)); * - * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO)); + * Usage: asm(ALTERNATIVE(oldinstr, newinstr, cpucap, CONFIG_FOO)); * N.B. If CONFIG_FOO is specified, but not selected, the whole block * will be omitted, including oldinstr. */ @@ -224,15 +224,15 @@ alternative_endif #include static __always_inline bool -alternative_has_feature_likely(const unsigned long feature) +alternative_has_cap_likely(const unsigned long cpucap) { - compiletime_assert(feature < ARM64_NCAPS, - "feature must be < ARM64_NCAPS"); + compiletime_assert(cpucap < ARM64_NCAPS, + "cpucap must be < ARM64_NCAPS"); asm_volatile_goto( - ALTERNATIVE_CB("b %l[l_no]", %[feature], alt_cb_patch_nops) + ALTERNATIVE_CB("b %l[l_no]", %[cpucap], alt_cb_patch_nops) : - : [feature] "i" (feature) + : [cpucap] "i" (cpucap) : : l_no); @@ -242,15 +242,15 @@ alternative_has_feature_likely(const unsigned long feature) } static __always_inline bool -alternative_has_feature_unlikely(const unsigned long feature) +alternative_has_cap_unlikely(const unsigned long cpucap) { - compiletime_assert(feature < ARM64_NCAPS, - "feature must be < ARM64_NCAPS"); + compiletime_assert(cpucap < ARM64_NCAPS, + "cpucap must be < ARM64_NCAPS"); asm_volatile_goto( - ALTERNATIVE("nop", "b %l[l_yes]", %[feature]) + ALTERNATIVE("nop", "b %l[l_yes]", %[cpucap]) : - : [feature] "i" (feature) + : [cpucap] "i" (cpucap) : : l_yes); diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h index a38b92e11811e..fe657e878757e 100644 --- a/arch/arm64/include/asm/alternative.h +++ b/arch/arm64/include/asm/alternative.h @@ -13,7 +13,7 @@ struct alt_instr { s32 orig_offset; /* offset to original instruction */ s32 alt_offset; /* offset to replacement instruction */ - u16 cpufeature; /* cpufeature bit set for replacement */ + u16 cpucap; /* cpucap bit set for replacement */ u8 orig_len; /* size of original instruction(s) */ u8 alt_len; /* size of new instruction(s), <= orig_len */ }; @@ -23,7 +23,7 @@ typedef void (*alternative_cb_t)(struct alt_instr *alt, void __init apply_boot_alternatives(void); void __init apply_alternatives_all(void); -bool alternative_is_applied(u16 cpufeature); +bool alternative_is_applied(u16 cpucap); #ifdef CONFIG_MODULES void apply_alternatives_module(void *start, size_t length); diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 04e11cdeda14b..6de2a5b232fce 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -437,7 +437,7 @@ unsigned long cpu_get_elf_hwcap2(void); static __always_inline bool system_capabilities_finalized(void) { - return alternative_has_feature_likely(ARM64_ALWAYS_SYSTEM); + return alternative_has_cap_likely(ARM64_ALWAYS_SYSTEM); } /* @@ -464,7 +464,7 @@ static __always_inline bool __cpus_have_const_cap(int num) { if (num >= ARM64_NCAPS) return false; - return alternative_has_feature_unlikely(num); + return alternative_has_cap_unlikely(num); } /* diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h index e0f5f6b73edd7..1f31ec146d161 100644 --- a/arch/arm64/include/asm/irqflags.h +++ b/arch/arm64/include/asm/irqflags.h @@ -24,7 +24,7 @@ static __always_inline bool __irqflags_uses_pmr(void) { return 
IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && - alternative_has_feature_unlikely(ARM64_HAS_GIC_PRIO_MASKING); + alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING); } static __always_inline void __daif_local_irq_enable(void) diff --git a/arch/arm64/include/asm/lse.h b/arch/arm64/include/asm/lse.h index f99d74826a7ef..cbbcdc35c4cd7 100644 --- a/arch/arm64/include/asm/lse.h +++ b/arch/arm64/include/asm/lse.h @@ -18,7 +18,7 @@ static __always_inline bool system_uses_lse_atomics(void) { - return alternative_has_feature_likely(ARM64_HAS_LSE_ATOMICS); + return alternative_has_cap_likely(ARM64_HAS_LSE_ATOMICS); } #define __lse_ll_sc_body(op, ...) \ diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c index 51a5dbd7f324e..53d13b2e5f59e 100644 --- a/arch/arm64/kernel/alternative.c +++ b/arch/arm64/kernel/alternative.c @@ -24,8 +24,8 @@ #define ALT_ORIG_PTR(a) __ALT_PTR(a, orig_offset) #define ALT_REPL_PTR(a) __ALT_PTR(a, alt_offset) -#define ALT_CAP(a) ((a)->cpufeature & ~ARM64_CB_BIT) -#define ALT_HAS_CB(a) ((a)->cpufeature & ARM64_CB_BIT) +#define ALT_CAP(a) ((a)->cpucap & ~ARM64_CB_BIT) +#define ALT_HAS_CB(a) ((a)->cpucap & ARM64_CB_BIT) /* Volatile, as we may be patching the guts of READ_ONCE() */ static volatile int all_alternatives_applied; @@ -37,12 +37,12 @@ struct alt_region { struct alt_instr *end; }; -bool alternative_is_applied(u16 cpufeature) +bool alternative_is_applied(u16 cpucap) { - if (WARN_ON(cpufeature >= ARM64_NCAPS)) + if (WARN_ON(cpucap >= ARM64_NCAPS)) return false; - return test_bit(cpufeature, applied_alternatives); + return test_bit(cpucap, applied_alternatives); } /* @@ -141,7 +141,7 @@ static void clean_dcache_range_nopatch(u64 start, u64 end) static void __apply_alternatives(const struct alt_region *region, bool is_module, - unsigned long *feature_mask) + unsigned long *cpucap_mask) { struct alt_instr *alt; __le32 *origptr, *updptr; @@ -151,7 +151,7 @@ static void __apply_alternatives(const struct alt_region *region, int nr_inst; int cap = ALT_CAP(alt); - if (!test_bit(cap, feature_mask)) + if (!test_bit(cap, cpucap_mask)) continue; if (!cpus_have_cap(cap)) @@ -188,9 +188,8 @@ static void __apply_alternatives(const struct alt_region *region, icache_inval_all_pou(); isb(); - /* Ignore ARM64_CB bit from feature mask */ bitmap_or(applied_alternatives, applied_alternatives, - feature_mask, ARM64_NCAPS); + cpucap_mask, ARM64_NCAPS); bitmap_and(applied_alternatives, applied_alternatives, system_cpucaps, ARM64_NCAPS); } From patchwork Wed Jun 7 16:48:45 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13270996
From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: broonie@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, suzuki.poulose@arm.com, will@kernel.org Subject: [PATCH v2 3/4] arm64: cpufeature: use cpucap naming Date: Wed, 7 Jun 2023 17:48:45 +0100 Message-Id: <20230607164846.3967305-4-mark.rutland@arm.com> In-Reply-To: <20230607164846.3967305-1-mark.rutland@arm.com> References: <20230607164846.3967305-1-mark.rutland@arm.com> To more clearly align the various users of the cpucap enumeration, this patch changes the cpufeature code to use the term `cpucap` in favour of `cpu_hwcap`. This more clearly aligns with other users of the cpucaps, and avoids confusion with the ELF hwcaps. There should be no functional change as a result of this patch; this is purely a renaming exercise.
Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Reviewed-by: Suzuki K Poulose Cc: Catalin Marinas Cc: Marc Zyngier Cc: Will Deacon --- arch/arm64/kernel/cpufeature.c | 38 +++++++++++++++++----------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 68e012f4ea080..82e6819528347 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -107,7 +107,7 @@ unsigned int compat_elf_hwcap2 __read_mostly; DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS); EXPORT_SYMBOL(system_cpucaps); -static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM64_NCAPS]; +static struct arm64_cpu_capabilities const __ro_after_init *cpucap_ptrs[ARM64_NCAPS]; DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS); @@ -954,24 +954,24 @@ extern const struct arm64_cpu_capabilities arm64_errata[]; static const struct arm64_cpu_capabilities arm64_features[]; static void __init -init_cpu_hwcaps_indirect_list_from_array(const struct arm64_cpu_capabilities *caps) +init_cpucap_indirect_list_from_array(const struct arm64_cpu_capabilities *caps) { for (; caps->matches; caps++) { if (WARN(caps->capability >= ARM64_NCAPS, "Invalid capability %d\n", caps->capability)) continue; - if (WARN(cpu_hwcaps_ptrs[caps->capability], + if (WARN(cpucap_ptrs[caps->capability], "Duplicate entry for capability %d\n", caps->capability)) continue; - cpu_hwcaps_ptrs[caps->capability] = caps; + cpucap_ptrs[caps->capability] = caps; } } -static void __init init_cpu_hwcaps_indirect_list(void) +static void __init init_cpucap_indirect_list(void) { - init_cpu_hwcaps_indirect_list_from_array(arm64_features); - init_cpu_hwcaps_indirect_list_from_array(arm64_errata); + init_cpucap_indirect_list_from_array(arm64_features); + init_cpucap_indirect_list_from_array(arm64_errata); } static void __init setup_boot_cpu_capabilities(void); @@ -1049,10 +1049,10 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid); /* - * Initialize the indirect array of CPU hwcaps capabilities pointers - * before we handle the boot CPU below. + * Initialize the indirect array of CPU capabilities pointers before we + * handle the boot CPU below. 
*/ - init_cpu_hwcaps_indirect_list(); + init_cpucap_indirect_list(); /* * Detect and enable early CPU capabilities based on the boot CPU, @@ -2048,9 +2048,9 @@ static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry, int scope) { - bool api = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope); - bool apa = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5], scope); - bool apa3 = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope); + bool api = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope); + bool apa = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5], scope); + bool apa3 = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope); return apa || apa3 || api; } @@ -2895,7 +2895,7 @@ static void update_cpu_capabilities(u16 scope_mask) scope_mask &= ARM64_CPUCAP_SCOPE_MASK; for (i = 0; i < ARM64_NCAPS; i++) { - caps = cpu_hwcaps_ptrs[i]; + caps = cpucap_ptrs[i]; if (!caps || !(caps->type & scope_mask) || cpus_have_cap(caps->capability) || !caps->matches(caps, cpucap_default_scope(caps))) @@ -2920,7 +2920,7 @@ static int cpu_enable_non_boot_scope_capabilities(void *__unused) u16 non_boot_scope = SCOPE_ALL & ~SCOPE_BOOT_CPU; for_each_available_cap(i) { - const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[i]; + const struct arm64_cpu_capabilities *cap = cpucap_ptrs[i]; if (WARN_ON(!cap)) continue; @@ -2950,7 +2950,7 @@ static void __init enable_cpu_capabilities(u16 scope_mask) for (i = 0; i < ARM64_NCAPS; i++) { unsigned int num; - caps = cpu_hwcaps_ptrs[i]; + caps = cpucap_ptrs[i]; if (!caps || !(caps->type & scope_mask)) continue; num = caps->capability; @@ -2995,7 +2995,7 @@ static void verify_local_cpu_caps(u16 scope_mask) scope_mask &= ARM64_CPUCAP_SCOPE_MASK; for (i = 0; i < ARM64_NCAPS; i++) { - caps = cpu_hwcaps_ptrs[i]; + caps = cpucap_ptrs[i]; if (!caps || !(caps->type & scope_mask)) continue; @@ -3194,7 +3194,7 @@ static void __init setup_boot_cpu_capabilities(void) bool this_cpu_has_cap(unsigned int n) { if (!WARN_ON(preemptible()) && n < ARM64_NCAPS) { - const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n]; + const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n]; if (cap) return cap->matches(cap, SCOPE_LOCAL_CPU); @@ -3213,7 +3213,7 @@ EXPORT_SYMBOL_GPL(this_cpu_has_cap); static bool __maybe_unused __system_matches_cap(unsigned int n) { if (n < ARM64_NCAPS) { - const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n]; + const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n]; if (cap) return cap->matches(cap, SCOPE_SYSTEM); From patchwork Wed Jun 7 16:48:46 2023 X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13271000
From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: broonie@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, suzuki.poulose@arm.com, will@kernel.org Subject: [PATCH v2 4/4] arm64: cpufeature: fold cpus_set_cap() into update_cpu_capabilities() Date: Wed, 7 Jun 2023 17:48:46 +0100 Message-Id: <20230607164846.3967305-5-mark.rutland@arm.com> In-Reply-To: <20230607164846.3967305-1-mark.rutland@arm.com> References: <20230607164846.3967305-1-mark.rutland@arm.com> We only use cpus_set_cap() in update_cpu_capabilities(), where we open-code an analogous update to boot_cpucaps. Due to the way the cpucap_ptrs[] array is initialized, we know that the capability number cannot be greater than or equal to ARM64_NCAPS, so the warning is superfluous. Fold cpus_set_cap() into update_cpu_capabilities(), matching what we do for the boot_cpucaps, and making the relationship between the two a bit clearer. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland Reviewed-by: Suzuki K Poulose Cc: Catalin Marinas Cc: Marc Zyngier Cc: Mark Brown Cc: Will Deacon --- arch/arm64/include/asm/cpufeature.h | 10 ---------- arch/arm64/kernel/cpufeature.c | 3 ++- 2 files changed, 2 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6de2a5b232fce..7a95c324e52a4 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -504,16 +504,6 @@ static __always_inline bool cpus_have_const_cap(int num) return cpus_have_cap(num); } -static inline void cpus_set_cap(unsigned int num) -{ - if (num >= ARM64_NCAPS) { - pr_warn("Attempt to set an illegal CPU capability (%d >= %d)\n", - num, ARM64_NCAPS); - } else { - __set_bit(num, system_cpucaps); - } -} - static inline int __attribute_const__ cpuid_feature_extract_signed_field_width(u64 features, int field, int width) { diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 82e6819528347..bfadac361ac14 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2903,7 +2903,8 @@ static void update_cpu_capabilities(u16 scope_mask) if (caps->desc) pr_info("detected: %s\n", caps->desc); - cpus_set_cap(caps->capability); + + __set_bit(caps->capability, system_cpucaps); if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU)) set_bit(caps->capability, boot_cpucaps);
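Taken together, the series leaves two bitmaps with parallel names: boot_cpucaps (caps detected with boot-CPU scope) and system_cpucaps (caps detected system-wide), both indexed by the ARM64_* cpucap constants. The self-contained C sketch below is an illustration only, not code from the series: it models the update_cpu_capabilities() flow from the hunks above in plain userspace C, using stand-in types, scope flags and bool arrays in place of the kernel's arm64_cpu_capabilities, SCOPE_* masks and DECLARE_BITMAP()/__set_bit(), to show how a detected cap lands in system_cpucaps and, for boot-CPU-scope caps, in boot_cpucaps as well.

/*
 * Illustrative model only: mirrors the update_cpu_capabilities() logic
 * shown in patches 1 and 4, with simplified stand-in names and types.
 */
#include <stdbool.h>
#include <stdio.h>

#define ARM64_NCAPS    64
#define SCOPE_BOOT_CPU (1 << 0)
#define SCOPE_SYSTEM   (1 << 1)

static bool system_cpucaps[ARM64_NCAPS]; /* kernel: DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS) */
static bool boot_cpucaps[ARM64_NCAPS];   /* kernel: DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS) */

struct cpu_capability {
	unsigned int capability; /* index into the bitmaps (a cpucap number) */
	unsigned int type;       /* scope flags */
	bool (*matches)(void);   /* detection hook */
};

static void update_cpu_capabilities(const struct cpu_capability *caps,
				    unsigned int ncaps, unsigned int scope_mask)
{
	for (unsigned int i = 0; i < ncaps; i++) {
		const struct cpu_capability *cap = &caps[i];

		if (!(cap->type & scope_mask) || !cap->matches())
			continue;

		/* Patch 4: the bit is set directly in system_cpucaps. */
		system_cpucaps[cap->capability] = true;

		/* Boot-CPU-scope caps are also recorded for early patching. */
		if ((scope_mask & SCOPE_BOOT_CPU) && (cap->type & SCOPE_BOOT_CPU))
			boot_cpucaps[cap->capability] = true;
	}
}

static bool always_true(void) { return true; }

int main(void)
{
	const struct cpu_capability caps[] = {
		{ .capability = 1, .type = SCOPE_BOOT_CPU, .matches = always_true },
		{ .capability = 2, .type = SCOPE_SYSTEM,   .matches = always_true },
	};

	update_cpu_capabilities(caps, 2, SCOPE_BOOT_CPU | SCOPE_SYSTEM);
	printf("cap 1: system=%d boot=%d\n", system_cpucaps[1], boot_cpucaps[1]);
	printf("cap 2: system=%d boot=%d\n", system_cpucaps[2], boot_cpucaps[2]);
	return 0;
}

Cap 1 ends up set in both arrays while cap 2 is set only in system_cpucaps, which is exactly the boot_cpucaps-is-a-subset-of-system_cpucaps relationship the renamed bitmaps are meant to make obvious.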