From patchwork Mon Jul 4 08:49:16 2011
From: Russell King - ARM Linux
X-Patchwork-Id: 941822
To: linux-arm-kernel@lists.infradead.org
Cc: Imre Kaloz, Krzysztof Halasa, Mike Rapoport, Eric Miao, Marek Szyprowski
Subject: [PATCH 05/10] ARM: dmabounce: move decision for bouncing into __dma_map_page()
Date: Mon, 04 Jul 2011 09:49:16 +0100
In-Reply-To: <20110704084711.GN21898@n2100.arm.linux.org.uk>
References: <20110704084711.GN21898@n2100.arm.linux.org.uk>
Move the decision whether to bounce into __dma_map_page(), before the
check for high pages.  This avoids triggering the high page check for
devices which aren't using dmabounce.  Fix the unmap path to cope too.

Signed-off-by: Russell King
---
 arch/arm/common/dmabounce.c |  124 ++++++++++++++++++++-----------------------
 1 files changed, 58 insertions(+), 66 deletions(-)

diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 643e1d6..6ae292c 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -246,88 +246,58 @@ static inline dma_addr_t map_single(struct device *dev, void *ptr,
 		size_t size, enum dma_data_direction dir)
 {
 	struct dmabounce_device_info *device_info = dev->archdata.dmabounce;
-	dma_addr_t dma_addr;
-	int ret;
+	struct safe_buffer *buf;
 
 	if (device_info)
 		DO_STATS ( device_info->map_op_count++ );
 
-	dma_addr = virt_to_dma(dev, ptr);
-
-	ret = needs_bounce(dev, dma_addr, size);
-	if (ret < 0)
+	buf = alloc_safe_buffer(device_info, ptr, size, dir);
+	if (buf == 0) {
+		dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
+			__func__, ptr);
 		return ~0;
+	}
 
-	if (ret > 0) {
-		struct safe_buffer *buf;
-
-		buf = alloc_safe_buffer(device_info, ptr, size, dir);
-		if (buf == 0) {
-			dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
-				__func__, ptr);
-			return ~0;
-		}
-
-		dev_dbg(dev,
-			"%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
-			__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
-			buf->safe, buf->safe_dma_addr);
-
-		if ((dir == DMA_TO_DEVICE) ||
-		    (dir == DMA_BIDIRECTIONAL)) {
-			dev_dbg(dev, "%s: copy unsafe %p to safe %p, size %d\n",
-				__func__, ptr, buf->safe, size);
-			memcpy(buf->safe, ptr, size);
-		}
-		ptr = buf->safe;
+	dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
+		__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
+		buf->safe, buf->safe_dma_addr);
 
-		dma_addr = buf->safe_dma_addr;
-	} else {
-		/*
-		 * We don't need to sync the DMA buffer since
-		 * it was allocated via the coherent allocators.
-		 */
-		__dma_single_cpu_to_dev(ptr, size, dir);
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) {
+		dev_dbg(dev, "%s: copy unsafe %p to safe %p, size %d\n",
+			__func__, ptr, buf->safe, size);
+		memcpy(buf->safe, ptr, size);
 	}
 
-	return dma_addr;
+	return buf->safe_dma_addr;
 }
 
-static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
+static inline void unmap_single(struct device *dev, struct safe_buffer *buf,
 		size_t size, enum dma_data_direction dir)
 {
-	struct safe_buffer *buf = find_safe_buffer_dev(dev, dma_addr, "unmap");
-
-	if (buf) {
-		BUG_ON(buf->size != size);
-		BUG_ON(buf->direction != dir);
+	BUG_ON(buf->size != size);
+	BUG_ON(buf->direction != dir);
 
-		dev_dbg(dev,
-			"%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
-			__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
-			buf->safe, buf->safe_dma_addr);
+	dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
+		__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
+		buf->safe, buf->safe_dma_addr);
 
-		DO_STATS(dev->archdata.dmabounce->bounce_count++);
+	DO_STATS(dev->archdata.dmabounce->bounce_count++);
 
-		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) {
-			void *ptr = buf->ptr;
+	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) {
+		void *ptr = buf->ptr;
 
-			dev_dbg(dev,
-				"%s: copy back safe %p to unsafe %p size %d\n",
-				__func__, buf->safe, ptr, size);
-			memcpy(ptr, buf->safe, size);
+		dev_dbg(dev, "%s: copy back safe %p to unsafe %p size %d\n",
+			__func__, buf->safe, ptr, size);
+		memcpy(ptr, buf->safe, size);
 
-			/*
-			 * Since we may have written to a page cache page,
-			 * we need to ensure that the data will be coherent
-			 * with user mappings.
-			 */
-			__cpuc_flush_dcache_area(ptr, size);
-		}
-		free_safe_buffer(dev->archdata.dmabounce, buf);
-	} else {
-		__dma_single_dev_to_cpu(dma_to_virt(dev, dma_addr), size, dir);
+		/*
+		 * Since we may have written to a page cache page,
+		 * we need to ensure that the data will be coherent
+		 * with user mappings.
+		 */
+		__cpuc_flush_dcache_area(ptr, size);
 	}
+	free_safe_buffer(dev->archdata.dmabounce, buf);
 }
 
 /* ************************************************** */
 
@@ -341,12 +311,25 @@ static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
 dma_addr_t __dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir)
 {
+	dma_addr_t dma_addr;
+	int ret;
+
 	dev_dbg(dev, "%s(page=%p,off=%#lx,size=%zx,dir=%x)\n",
 		__func__, page, offset, size, dir);
 
+	dma_addr = pfn_to_dma(dev, page_to_pfn(page)) + offset;
+
+	ret = needs_bounce(dev, dma_addr, size);
+	if (ret < 0)
+		return ~0;
+
+	if (ret == 0) {
+		__dma_page_cpu_to_dev(page, offset, size, dir);
+		return dma_addr;
+	}
+
 	if (PageHighMem(page)) {
-		dev_err(dev, "DMA buffer bouncing of HIGHMEM pages "
-			"is not supported\n");
+		dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n");
 		return ~0;
 	}
 
@@ -363,10 +346,19 @@ EXPORT_SYMBOL(__dma_map_page);
 void __dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
 		enum dma_data_direction dir)
 {
+	struct safe_buffer *buf;
+
 	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
 		__func__, (void *) dma_addr, size, dir);
 
-	unmap_single(dev, dma_addr, size, dir);
+	buf = find_safe_buffer_dev(dev, dma_addr, __func__);
+	if (!buf) {
+		__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, dma_addr)),
+			dma_addr & ~PAGE_MASK, size, dir);
+		return;
+	}
+
+	unmap_single(dev, buf, size, dir);
 }
 EXPORT_SYMBOL(__dma_unmap_page);
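
For reference, a minimal hypothetical caller-side fragment (not part of
the patch; the driver function and names below are made up).  On ARM,
dma_map_page()/dma_unmap_page() funnel into __dma_map_page() and
__dma_unmap_page(), so the bounce decision stays invisible to drivers;
with this patch it is taken before the PageHighMem() check, so only
devices that actually need bouncing can fail on a highmem page:

/* Hypothetical driver fragment, for illustration only. */
#include <linux/dma-mapping.h>

static int example_start_tx(struct device *dev, struct page *page,
			    unsigned long offset, size_t len)
{
	dma_addr_t handle;

	/*
	 * Ends up in __dma_map_page(); needs_bounce() now runs first,
	 * so a non-dmabounce device never sees the highmem error path.
	 */
	handle = dma_map_page(dev, page, offset, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... program the device with 'handle', wait for completion ... */

	/* Non-bounced mappings get only the dev-to-cpu cache maintenance. */
	dma_unmap_page(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}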