From patchwork Tue Jul 24 12:01:46 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10541975
From: Christoph Hellwig
To: Yoshinori Sato, Rich Felker
Cc: Jacopo Mondi, Thomas Petazzoni, linux-sh@vger.kernel.org,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/5] sh: split arch/sh/mm/consistent.c
Date: Tue, 24 Jul 2018 14:01:46 +0200
Message-Id: <20180724120147.15096-5-hch@lst.de>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180724120147.15096-1-hch@lst.de>
References: <20180724120147.15096-1-hch@lst.de>
X-Mailing-List: linux-sh@vger.kernel.org

Half of the file just contains platform device memory setup code which is
required for all builds, and half contains helpers for dma coherent
allocation, which is only needed if CONFIG_DMA_NONCOHERENT is enabled.
Signed-off-by: Christoph Hellwig
---
 arch/sh/kernel/Makefile       |  2 +-
 arch/sh/kernel/dma-coherent.c | 85 +++++++++++++++++++++++++++++++++++
 arch/sh/mm/consistent.c       | 80 ---------------------------------
 3 files changed, 86 insertions(+), 81 deletions(-)
 create mode 100644 arch/sh/kernel/dma-coherent.c

diff --git a/arch/sh/kernel/Makefile b/arch/sh/kernel/Makefile
index cb5f1bfb52de..d5ddb64bfffe 100644
--- a/arch/sh/kernel/Makefile
+++ b/arch/sh/kernel/Makefile
@@ -45,7 +45,7 @@ obj-$(CONFIG_DUMP_CODE)		+= disassemble.o
 obj-$(CONFIG_HIBERNATION)	+= swsusp.o
 obj-$(CONFIG_DWARF_UNWINDER)	+= dwarf.o
 obj-$(CONFIG_PERF_EVENTS)	+= perf_event.o perf_callchain.o
-obj-$(CONFIG_DMA_NONCOHERENT)	+= dma-nommu.o
+obj-$(CONFIG_DMA_NONCOHERENT)	+= dma-nommu.o dma-coherent.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o

 ccflags-y := -Werror
diff --git a/arch/sh/kernel/dma-coherent.c b/arch/sh/kernel/dma-coherent.c
new file mode 100644
index 000000000000..763ba10fbd3e
--- /dev/null
+++ b/arch/sh/kernel/dma-coherent.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2004 - 2007 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+void *dma_generic_alloc_coherent(struct device *dev, size_t size,
+				 dma_addr_t *dma_handle, gfp_t gfp,
+				 unsigned long attrs)
+{
+	void *ret, *ret_nocache;
+	int order = get_order(size);
+
+	gfp |= __GFP_ZERO;
+
+	ret = (void *)__get_free_pages(gfp, order);
+	if (!ret)
+		return NULL;
+
+	/*
+	 * Pages from the page allocator may have data present in
+	 * cache. So flush the cache before using uncached memory.
+	 */
+	sh_sync_dma_for_device(ret, size, DMA_BIDIRECTIONAL);
+
+	ret_nocache = (void __force *)ioremap_nocache(virt_to_phys(ret), size);
+	if (!ret_nocache) {
+		free_pages((unsigned long)ret, order);
+		return NULL;
+	}
+
+	split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);
+
+	*dma_handle = virt_to_phys(ret);
+	if (!WARN_ON(!dev))
+		*dma_handle -= PFN_PHYS(dev->dma_pfn_offset);
+
+	return ret_nocache;
+}
+
+void dma_generic_free_coherent(struct device *dev, size_t size,
+			       void *vaddr, dma_addr_t dma_handle,
+			       unsigned long attrs)
+{
+	int order = get_order(size);
+	unsigned long pfn = (dma_handle >> PAGE_SHIFT);
+	int k;
+
+	if (!WARN_ON(!dev))
+		pfn += dev->dma_pfn_offset;
+
+	for (k = 0; k < (1 << order); k++)
+		__free_pages(pfn_to_page(pfn + k), 0);
+
+	iounmap(vaddr);
+}
+
+void sh_sync_dma_for_device(void *vaddr, size_t size,
+			    enum dma_data_direction direction)
+{
+	void *addr = sh_cacheop_vaddr(vaddr);
+
+	switch (direction) {
+	case DMA_FROM_DEVICE:	/* invalidate only */
+		__flush_invalidate_region(addr, size);
+		break;
+	case DMA_TO_DEVICE:	/* writeback only */
+		__flush_wback_region(addr, size);
+		break;
+	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
+		__flush_purge_region(addr, size);
+		break;
+	default:
+		BUG();
+	}
+}
+EXPORT_SYMBOL(sh_sync_dma_for_device);
diff --git a/arch/sh/mm/consistent.c b/arch/sh/mm/consistent.c
index 1622ae6b9dbd..792f36129062 100644
--- a/arch/sh/mm/consistent.c
+++ b/arch/sh/mm/consistent.c
@@ -1,10 +1,6 @@
 /*
- * arch/sh/mm/consistent.c
- *
  * Copyright (C) 2004 - 2007 Paul Mundt
  *
- * Declared coherent memory functions based on arch/x86/kernel/pci-dma_32.c
- *
  * This file is subject to the terms and conditions of the GNU General Public
  * License. See the file "COPYING" in the main directory of this archive
  * for more details.
@@ -13,83 +9,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
-#include
-#include
-#include
-
-void *dma_generic_alloc_coherent(struct device *dev, size_t size,
-				 dma_addr_t *dma_handle, gfp_t gfp,
-				 unsigned long attrs)
-{
-	void *ret, *ret_nocache;
-	int order = get_order(size);
-
-	gfp |= __GFP_ZERO;
-
-	ret = (void *)__get_free_pages(gfp, order);
-	if (!ret)
-		return NULL;
-
-	/*
-	 * Pages from the page allocator may have data present in
-	 * cache. So flush the cache before using uncached memory.
-	 */
-	sh_sync_dma_for_device(ret, size, DMA_BIDIRECTIONAL);
-
-	ret_nocache = (void __force *)ioremap_nocache(virt_to_phys(ret), size);
-	if (!ret_nocache) {
-		free_pages((unsigned long)ret, order);
-		return NULL;
-	}
-
-	split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);
-
-	*dma_handle = virt_to_phys(ret);
-	if (!WARN_ON(!dev))
-		*dma_handle -= PFN_PHYS(dev->dma_pfn_offset);
-
-	return ret_nocache;
-}
-
-void dma_generic_free_coherent(struct device *dev, size_t size,
-			       void *vaddr, dma_addr_t dma_handle,
-			       unsigned long attrs)
-{
-	int order = get_order(size);
-	unsigned long pfn = dma_handle >> PAGE_SHIFT;
-	int k;
-
-	if (!WARN_ON(!dev))
-		pfn += dev->dma_pfn_offset;
-
-	for (k = 0; k < (1 << order); k++)
-		__free_pages(pfn_to_page(pfn + k), 0);
-
-	iounmap(vaddr);
-}
-
-void sh_sync_dma_for_device(void *vaddr, size_t size,
-			    enum dma_data_direction direction)
-{
-	void *addr = sh_cacheop_vaddr(vaddr);
-
-	switch (direction) {
-	case DMA_FROM_DEVICE:	/* invalidate only */
-		__flush_invalidate_region(addr, size);
-		break;
-	case DMA_TO_DEVICE:	/* writeback only */
-		__flush_wback_region(addr, size);
-		break;
-	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
-		__flush_purge_region(addr, size);
-		break;
-	default:
-		BUG();
-	}
-}

 static int __init memchunk_setup(char *str)
 {