From patchwork Thu Oct 17 17:45:39 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11196669
From: Christoph Hellwig <hch@lst.de>
To: Arnd Bergmann, Guo Ren, Michal Simek, Greentime Hu, Vincent Chen,
	Guan Xuetao, x86@kernel.org
Subject: [PATCH 06/21] nios2: remove __ioremap
Date: Thu, 17 Oct 2019 19:45:39 +0200
Message-Id: <20191017174554.29840-7-hch@lst.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191017174554.29840-1-hch@lst.de>
References: <20191017174554.29840-1-hch@lst.de>
Cc: linux-arch@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-sh@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, linux-mips@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	openrisc@lists.librecores.org, linux-mtd@lists.infradead.org,
	linux-alpha@vger.kernel.org, sparclinux@vger.kernel.org,
	nios2-dev@lists.rocketboards.org, linux-riscv@lists.infradead.org,
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org

The cacheflag argument to __ioremap is always 0, so just implement
ioremap directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/nios2/include/asm/io.h | 20 ++++----------------
 arch/nios2/mm/ioremap.c     | 17 +++--------------
 2 files changed, 7 insertions(+), 30 deletions(-)

diff --git a/arch/nios2/include/asm/io.h b/arch/nios2/include/asm/io.h
index 9010243077ab..74ab34aa6731 100644
--- a/arch/nios2/include/asm/io.h
+++ b/arch/nios2/include/asm/io.h
@@ -25,29 +25,17 @@
 #define writew_relaxed(x, addr) writew(x, addr)
 #define writel_relaxed(x, addr) writel(x, addr)
 
-extern void __iomem *__ioremap(unsigned long physaddr, unsigned long size,
-			unsigned long cacheflag);
+void __iomem *ioremap(unsigned long physaddr, unsigned long size);
 extern void __iounmap(void __iomem *addr);
 
-static inline void __iomem *ioremap(unsigned long physaddr, unsigned long size)
-{
-	return __ioremap(physaddr, size, 0);
-}
-
-static inline void __iomem *ioremap_nocache(unsigned long physaddr,
-			unsigned long size)
-{
-	return __ioremap(physaddr, size, 0);
-}
-
 static inline void iounmap(void __iomem *addr)
 {
 	__iounmap(addr);
 }
 
-#define ioremap_nocache ioremap_nocache
-#define ioremap_wc ioremap_nocache
-#define ioremap_wt ioremap_nocache
+#define ioremap_nocache ioremap
+#define ioremap_wc ioremap
+#define ioremap_wt ioremap
 
 /* Pages to physical address... */
 #define page_to_phys(page)	virt_to_phys(page_to_virt(page))
diff --git a/arch/nios2/mm/ioremap.c b/arch/nios2/mm/ioremap.c
index 3a28177a01eb..7a1a27f3daa3 100644
--- a/arch/nios2/mm/ioremap.c
+++ b/arch/nios2/mm/ioremap.c
@@ -112,8 +112,7 @@ static int remap_area_pages(unsigned long address, unsigned long phys_addr,
 /*
  * Map some physical address range into the kernel address space.
  */
-void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
-			unsigned long cacheflag)
+void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
 {
 	struct vm_struct *area;
 	unsigned long offset;
@@ -139,15 +138,6 @@ void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
 				return NULL;
 	}
 
-	/*
-	 * Map uncached objects in the low part of address space to
-	 * CONFIG_NIOS2_IO_REGION_BASE
-	 */
-	if (IS_MAPPABLE_UNCACHEABLE(phys_addr) &&
-	    IS_MAPPABLE_UNCACHEABLE(last_addr) &&
-	    !(cacheflag & _PAGE_CACHED))
-		return (void __iomem *)(CONFIG_NIOS2_IO_REGION_BASE + phys_addr);
-
 	/* Mappings have to be page-aligned */
 	offset = phys_addr & ~PAGE_MASK;
 	phys_addr &= PAGE_MASK;
@@ -158,14 +148,13 @@ void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages((unsigned long) addr, phys_addr, size,
-		cacheflag)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, 0)) {
 		vunmap(addr);
 		return NULL;
 	}
 	return (void __iomem *) (offset + (char *)addr);
 }
-EXPORT_SYMBOL(__ioremap);
+EXPORT_SYMBOL(ioremap);
 
 /*
  * __iounmap unmaps nearly everything, so be careful
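
[Not part of the patch: the sketch below shows how a typical caller would use
the surviving ioremap()/iounmap() pair after this change.  The device base
address, window size and register offset are hypothetical, chosen only for
illustration.]

/*
 * Illustrative sketch, not from the patch: caller-side use of ioremap().
 * EXAMPLE_* values below are made up.
 */
#include <linux/io.h>
#include <linux/errno.h>

#define EXAMPLE_PHYS_BASE	0x10000000UL	/* hypothetical device base */
#define EXAMPLE_REG_SIZE	0x100UL		/* hypothetical window size */
#define EXAMPLE_CTRL_REG	0x04		/* hypothetical control register */

static int example_enable_device(void)
{
	void __iomem *regs;

	/* cacheflag was always 0, so the mapping is always uncached */
	regs = ioremap(EXAMPLE_PHYS_BASE, EXAMPLE_REG_SIZE);
	if (!regs)
		return -ENOMEM;

	writel(1, regs + EXAMPLE_CTRL_REG);	/* poke the device */
	iounmap(regs);

	return 0;
}

With the cacheflag parameter gone, ioremap_nocache(), ioremap_wc() and
ioremap_wt() all become plain aliases for ioremap(), which matches how every
existing caller already behaved.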