From patchwork Thu Apr 21 08:18:18 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 8897061
From: Vladimir Murzin
To: linux@arm.linux.org.uk
Subject: [PATCH RFC 06/10] ARM: V7M: Implement cache macros for V7M
Date: Thu, 21 Apr 2016 09:18:18 +0100
Message-Id: <1461226702-27160-7-git-send-email-vladimir.murzin@arm.com>
X-Mailer: git-send-email 2.0.0
In-Reply-To:
 <1461226702-27160-1-git-send-email-vladimir.murzin@arm.com>
References: <1461226702-27160-1-git-send-email-vladimir.murzin@arm.com>
Cc: mcoquelin.stm32@gmail.com, manabian@gmail.com, stefan@agner.ch,
 kbuild-all@01.org, kernel@pengutronix.de,
 linux-arm-kernel@lists.infradead.org

From: Jonathan Austin

This commit implements the cache operation macros for V7M, paving the way
for caches to be used on V7M in a future commit.

Because the cache operations in V7M are memory mapped, most operations
require an extra register compared to the V7 versions, where the type of
operation is encoded in the instruction rather than in the address that is
written to. An extra register argument has therefore been added to the
cache operation macros; it is required on V7M but ignored/unused on V7.

In almost all cases a spare temporary register was available, but in places
where register allocation was tighter the ARM/THUMB macros have been used
to avoid clobbering new registers.
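As an editorial illustration (not part of the patch), the arithmetic behind the memory-mapped approach can be sketched as follows. The V7M macros build the address of an SCB cache-maintenance register with a movw/movt pair and then store the MVA to it, instead of encoding the operation in an `mcr` instruction as on V7. The `BASEADDR_V7M_SCB` and `DCCMVAC` offset values below are the architected ARMv7-M System Control Block values, included here for illustration only:

```python
# Sketch of how the v7m_cacheop macro forms a cache-operation address.
BASEADDR_V7M_SCB = 0xE000ED00   # ARMv7-M System Control Block base (architected)
V7M_SCB_DCCMVAC = 0x268         # offset of DCCMVAC from the SCB base

def movw_movt_split(addr):
    """Mimic how a movw/movt pair builds a 32-bit constant: movw loads the
    lower 16 bits, movt the upper 16 bits (as the V7M macros do)."""
    lower16 = addr & 0xFFFF          # movw \tmp, #:lower16:...
    upper16 = (addr >> 16) & 0xFFFF  # movt \tmp, #:upper16:...
    return (upper16 << 16) | lower16

# On V7, `mcr p15, 0, rt, c7, c10, 1` encodes "clean by MVA to PoC" in the
# instruction itself; on V7M the MVA in rt is instead stored to this
# memory-mapped register, which is why the macros need the extra register.
reg_addr = movw_movt_split(BASEADDR_V7M_SCB + V7M_SCB_DCCMVAC)
assert reg_addr == 0xE000EF68
```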
Signed-off-by: Jonathan Austin
Signed-off-by: Vladimir Murzin
---
 arch/arm/mm/cache-v7.S         |  41 +++++++-----
 arch/arm/mm/v7-cache-macros.S  |  23 ++++---
 arch/arm/mm/v7m-cache-macros.S | 140 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 179 insertions(+), 25 deletions(-)
 create mode 100644 arch/arm/mm/v7m-cache-macros.S

diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 53a802e..a0c89c6 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -17,7 +17,11 @@
 #include
 #include "proc-macros.S"
+#ifdef CONFIG_CPU_V7M
+#include "v7m-cache-macros.S"
+#else
 #include "v7-cache-macros.S"
+#endif

 /*
  * The secondary kernel init calls v7_flush_dcache_all before it enables
@@ -35,7 +39,7 @@ ENTRY(v7_invalidate_l1)
 	mov	r0, #0
-	write_csselr	r0
+	write_csselr	r0, r1
 	read_ccsidr	r0
 	movw	r1, #0x7fff
 	and	r2, r1, r0, lsr #13
@@ -56,7 +60,7 @@ ENTRY(v7_invalidate_l1)
 	mov	r5, r3, lsl r1
 	mov	r6, r2, lsl r0
 	orr	r5, r5, r6	@ Reg = (Temp<
+ *
+ * The 'unused' parameters are to keep the macro signatures in sync with the
+ * V7M versions, which require a tmp register for certain operations (see
+ * v7m-cache-macros.S). GAS supports omitting optional arguments but doesn't
+ * happily ignore additional undefined ones.
  */

 .macro read_ctr, rt
@@ -29,56 +34,56 @@
 	mrc	p15, 1, \rt, c0, c0, 1
 .endm

-.macro write_csselr, rt
+.macro write_csselr, rt, unused
 	mcr	p15, 2, \rt, c0, c0, 0
 .endm

 /*
  * dcisw: invalidate data cache by set/way
  */
-.macro dcisw, rt
+.macro dcisw, rt, unused
 	mcr	p15, 0, \rt, c7, c6, 2
 .endm

 /*
  * dccisw: clean and invalidate data cache by set/way
  */
-.macro dccisw, rt
+.macro dccisw, rt, unused
 	mcr	p15, 0, \rt, c7, c14, 2
 .endm

 /*
  * dccimvac: Clean and invalidate data cache line by MVA to PoC.
  */
-.macro dccimvac, rt, cond = al
-	mcr\cond	p15, 0, \rt, c7, c14, 1
+.macro dccimvac, rt, unused
+	mcr	p15, 0, \rt, c7, c14, 1
 .endm

 /*
  * dcimvac: Invalidate data cache line by MVA to PoC
  */
-.macro dcimvac, rt
+.macro dcimvac, rt, unused
 	mcr	p15, 0, r0, c7, c6, 1
 .endm

 /*
  * dccmvau: Clean data cache line by MVA to PoU
  */
-.macro dccmvau, rt
+.macro dccmvau, rt, unused
 	mcr	p15, 0, \rt, c7, c11, 1
 .endm

 /*
  * dccmvac: Clean data cache line by MVA to PoC
  */
-.macro dccmvac, rt
+.macro dccmvac, rt, unused
 	mcr	p15, 0, \rt, c7, c10, 1
 .endm

 /*
  * icimvau: Invalidate instruction caches by MVA to PoU
  */
-.macro icimvau, rt
+.macro icimvau, rt, unused
 	mcr	p15, 0, \rt, c7, c5, 1
 .endm

diff --git a/arch/arm/mm/v7m-cache-macros.S b/arch/arm/mm/v7m-cache-macros.S
new file mode 100644
index 0000000..9a07c15
--- /dev/null
+++ b/arch/arm/mm/v7m-cache-macros.S
@@ -0,0 +1,140 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * Author: Jonathan Austin
+ */
+#include "asm/v7m.h"
+#include "asm/assembler.h"
+
+/* Generic V7M read/write macros for memory mapped cache operations */
+.macro v7m_cache_read, rt, reg
+	movw	\rt, #:lower16:BASEADDR_V7M_SCB + \reg
+	movt	\rt, #:upper16:BASEADDR_V7M_SCB + \reg
+	ldr	\rt, [\rt]
+.endm
+
+.macro v7m_cacheop, rt, tmp, op
+	movw	\tmp, #:lower16:BASEADDR_V7M_SCB + \op
+	movt	\tmp, #:upper16:BASEADDR_V7M_SCB + \op
+	str	\rt, [\tmp]
+.endm
+
+/* read/write cache properties */
+.macro read_ctr, rt
+	v7m_cache_read \rt, V7M_SCB_CTR
+.endm
+
+.macro read_ccsidr, rt
+	v7m_cache_read \rt, V7M_SCB_CCSIDR
+.endm
+
+.macro read_clidr, rt
+	v7m_cache_read \rt, V7M_SCB_CLIDR
+.endm
+
+.macro write_csselr, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_CSSELR
+.endm
+
+/*
+ * dcisw: Invalidate data cache by set/way
+ */
+.macro dcisw, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCISW
+.endm
+
+/*
+ * dccisw: Clean and invalidate data cache by set/way
+ */
+.macro dccisw, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCISW
+.endm
+
+/*
+ * dccimvac: Clean and invalidate data cache line by MVA to PoC.
+ */
+.macro dccimvac, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCIMVAC
+.endm
+
+/*
+ * dcimvac: Invalidate data cache line by MVA to PoC
+ */
+.macro dcimvac, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCIMVAC
+.endm
+
+/*
+ * dccmvau: Clean data cache line by MVA to PoU
+ */
+.macro dccmvau, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCMVAU
+.endm
+
+/*
+ * dccmvac: Clean data cache line by MVA to PoC
+ */
+.macro dccmvac, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCMVAC
+.endm
+
+/*
+ * icimvau: Invalidate instruction caches by MVA to PoU
+ */
+.macro icimvau, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_ICIMVAU
+.endm
+
+/*
+ * Invalidate the icache, inner shareable if SMP, invalidate BTB for UP.
+ * rt data ignored by ICIALLU(IS), so can be used for the address
+ */
+.macro invalidate_icache, rt
+	v7m_cacheop \rt, \rt, V7M_SCB_ICIALLU
+	mov	\rt, #0
+.endm
+
+/*
+ * Invalidate the BTB, inner shareable if SMP.
+ * rt data ignored by BPIALL, so it can be used for the address
+ */
+.macro invalidate_bp, rt
+	v7m_cacheop \rt, \rt, V7M_SCB_BPIALL
+	mov	\rt, #0
+.endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register
+ * on ARMv7.
+ */
+.macro dcache_line_size, reg, tmp
+	read_ctr \tmp
+	lsr	\tmp, \tmp, #16
+	and	\tmp, \tmp, #0xf		@ cache line size encoding
+	mov	\reg, #4			@ bytes per word
+	mov	\reg, \reg, lsl \tmp		@ actual cache line size
+.endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register
+ * on ARMv7.
+ */
+.macro icache_line_size, reg, tmp
+	read_ctr \tmp
+	and	\tmp, \tmp, #0xf		@ cache line size encoding
+	mov	\reg, #4			@ bytes per word
+	mov	\reg, \reg, lsl \tmp		@ actual cache line size
+.endm
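
As an editorial aside (not part of the patch), the line-size macros above decode the CTR fields DminLine (bits [19:16]) and IminLine (bits [3:0]), which hold log2 of the smallest line size in words; the macros compute `4 << field` bytes. The arithmetic can be checked in isolation:

```python
# Sketch of the arithmetic in dcache_line_size/icache_line_size above.
def dcache_line_size(ctr):
    dminline = (ctr >> 16) & 0xF   # lsr \tmp, \tmp, #16; and \tmp, \tmp, #0xf
    return 4 << dminline           # mov \reg, #4; mov \reg, \reg, lsl \tmp

def icache_line_size(ctr):
    iminline = ctr & 0xF           # and \tmp, \tmp, #0xf
    return 4 << iminline

# A hypothetical CTR value with DminLine = IminLine = 3 (8-word lines):
ctr = (3 << 16) | 3
assert dcache_line_size(ctr) == 32   # 8 words * 4 bytes = 32-byte lines
assert icache_line_size(ctr) == 32
```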