From patchwork Tue Jan 26 09:23:43 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 8119191
From: Vladimir Murzin
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/4] ARM: use proper helper while extracting cpu features
Date: Tue, 26 Jan 2016 09:23:43 +0000
Message-Id: <1453800223-18590-5-git-send-email-vladimir.murzin@arm.com>
X-Mailer: git-send-email 2.0.0
In-Reply-To:
 <1453800223-18590-1-git-send-email-vladimir.murzin@arm.com>
References: <1453800223-18590-1-git-send-email-vladimir.murzin@arm.com>
Cc: linux@arm.linux.org.uk, ard.biesheuvel@linaro.org

Update the current users of the cpu feature helpers to use the proper
helper depending on how the feature bits should be handled.

We follow the arm64 scheme [1] (slightly rephrased). We have three types
of fields:

 a) A precise value, or a value from which some precise value is derived.
 b) Fields defining the presence of a feature (1, 2, 3). These are always
    positive, since the absence of the feature means a value of 0.
 c) Fields defining the absence of a feature by setting 0xf. These are
    usually fields that were initially RAZ and were later turned into -1.
So we can treat (a) and (b) as unsigned permanently, and (c) as signed.

[1] https://lkml.org/lkml/2015/11/19/549

Signed-off-by: Vladimir Murzin
Acked-by: Ard Biesheuvel
---
 arch/arm/include/asm/smp_plat.h |  4 ++--
 arch/arm/kernel/setup.c         | 34 ++++++++++++++++++++--------------
 arch/arm/kernel/thumbee.c       |  4 +---
 arch/arm/mm/mmu.c               |  2 +-
 4 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/arch/arm/include/asm/smp_plat.h b/arch/arm/include/asm/smp_plat.h
index f908071..03bbe02 100644
--- a/arch/arm/include/asm/smp_plat.h
+++ b/arch/arm/include/asm/smp_plat.h
@@ -49,7 +49,7 @@ static inline int tlb_ops_need_broadcast(void)
 	if (!is_smp())
 		return 0;
 
-	return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 2;
+	return cpuid_feature_extract_unsigned(CPUID_EXT_MMFR3, 12) < 2;
 }
 #endif
 
@@ -61,7 +61,7 @@ static inline int cache_ops_need_broadcast(void)
 	if (!is_smp())
 		return 0;
 
-	return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 1;
+	return cpuid_feature_extract_unsigned(CPUID_EXT_MMFR3, 12) < 1;
 }
 #endif
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index fde041b..e696553 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -257,11 +257,15 @@ static int __get_cpu_architecture(void)
 		/* Revised CPUID format. Read the Memory Model Feature
 		 * Register 0 and check for VMSAv7 or PMSAv7 */
 		unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0);
-		if ((mmfr0 & 0x0000000f) >= 0x00000003 ||
-		    (mmfr0 & 0x000000f0) >= 0x00000030)
+		unsigned int block;
+#ifdef CONFIG_MMU
+		block = cpuid_feature_extract_unsigned_field(mmfr0, 0);
+#else
+		block = cpuid_feature_extract_unsigned_field(mmfr0, 4);
+#endif
+		if (block >= 3)
 			cpu_arch = CPU_ARCH_ARMv7;
-		else if ((mmfr0 & 0x0000000f) == 0x00000002 ||
-			 (mmfr0 & 0x000000f0) == 0x00000020)
+		else if (block == 2)
 			cpu_arch = CPU_ARCH_ARMv6;
 		else
 			cpu_arch = CPU_ARCH_UNKNOWN;
@@ -446,41 +450,41 @@ static inline void patch_aeabi_idiv(void) { }
 
 static void __init cpuid_init_hwcaps(void)
 {
-	int block;
+	unsigned int block;
 	u32 isar5;
 
 	if (cpu_architecture() < CPU_ARCH_ARMv7)
 		return;
 
-	block = cpuid_feature_extract(CPUID_EXT_ISAR0, 24);
+	block = cpuid_feature_extract_unsigned(CPUID_EXT_ISAR0, 24);
 	if (block >= 2)
 		elf_hwcap |= HWCAP_IDIVA;
 	if (block >= 1)
 		elf_hwcap |= HWCAP_IDIVT;
 
 	/* LPAE implies atomic ldrd/strd instructions */
-	block = cpuid_feature_extract(CPUID_EXT_MMFR0, 0);
+	block = cpuid_feature_extract_unsigned(CPUID_EXT_MMFR0, 0);
 	if (block >= 5)
 		elf_hwcap |= HWCAP_LPAE;
 
 	/* check for supported v8 Crypto instructions */
 	isar5 = read_cpuid_ext(CPUID_EXT_ISAR5);
-	block = cpuid_feature_extract_field(isar5, 4);
+	block = cpuid_feature_extract_unsigned_field(isar5, 4);
 	if (block >= 2)
 		elf_hwcap2 |= HWCAP2_PMULL;
 	if (block >= 1)
 		elf_hwcap2 |= HWCAP2_AES;
 
-	block = cpuid_feature_extract_field(isar5, 8);
+	block = cpuid_feature_extract_unsigned_field(isar5, 8);
 	if (block >= 1)
 		elf_hwcap2 |= HWCAP2_SHA1;
 
-	block = cpuid_feature_extract_field(isar5, 12);
+	block = cpuid_feature_extract_unsigned_field(isar5, 12);
 	if (block >= 1)
 		elf_hwcap2 |= HWCAP2_SHA2;
 
-	block = cpuid_feature_extract_field(isar5, 16);
+	block = cpuid_feature_extract_unsigned_field(isar5, 16);
 	if (block >= 1)
 		elf_hwcap2 |= HWCAP2_CRC32;
 }
@@ -488,6 +492,7 @@ static void __init cpuid_init_hwcaps(void)
 static void __init elf_hwcap_fixup(void)
 {
 	unsigned id = read_cpuid_id();
+	unsigned int block;
 
 	/*
 	 * HWCAP_TLS is available only on 1136 r1p0 and later,
@@ -508,9 +513,10 @@ static void __init elf_hwcap_fixup(void)
 	 * avoid advertising SWP; it may not be atomic with
 	 * multiprocessing cores.
 	 */
-	if (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) > 1 ||
-	    (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) == 1 &&
-	     cpuid_feature_extract(CPUID_EXT_ISAR4, 20) >= 3))
+	block = cpuid_feature_extract_unsigned(CPUID_EXT_ISAR3, 12);
+
+	if (block > 1 || (block == 1 &&
+	    cpuid_feature_extract_unsigned(CPUID_EXT_ISAR4, 20) >= 3))
 		elf_hwcap &= ~HWCAP_SWP;
 }
diff --git a/arch/arm/kernel/thumbee.c b/arch/arm/kernel/thumbee.c
index 8ff8dbf..b5cac80 100644
--- a/arch/arm/kernel/thumbee.c
+++ b/arch/arm/kernel/thumbee.c
@@ -62,14 +62,12 @@ static struct notifier_block thumbee_notifier_block = {
 
 static int __init thumbee_init(void)
 {
-	unsigned long pfr0;
 	unsigned int cpu_arch = cpu_architecture();
 
 	if (cpu_arch < CPU_ARCH_ARMv7)
 		return 0;
 
-	pfr0 = read_cpuid_ext(CPUID_EXT_PFR0);
-	if ((pfr0 & 0x0000f000) != 0x00001000)
+	if (cpuid_feature_extract_unsigned(CPUID_EXT_PFR0, 12) != 1)
 		return 0;
 
 	pr_info("ThumbEE CPU extension supported.\n");
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 434d76f..06723a9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -572,7 +572,7 @@ static void __init build_mem_type_table(void)
 	 * in the Short-descriptor translation table format descriptors.
 	 */
 	if (cpu_arch == CPU_ARCH_ARMv7 &&
-	    (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xF) >= 4) {
+	    (cpuid_feature_extract_unsigned(CPUID_EXT_MMFR0, 0) >= 4)) {
 		user_pmd_table |= PMD_PXNTABLE;
 	}
 #endif