From patchwork Fri May 1 13:56:43 2015
X-Patchwork-Submitter: Akinobu Mita
X-Patchwork-Id: 6309571
From: Akinobu Mita
To: linux-kernel@vger.kernel.org, akpm@linux-foundation.org
Cc: Akinobu Mita, "James E.J. Bottomley", Helge Deller,
    linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [PATCH v2 10/10] parisc: use for_each_sg()
Date: Fri, 1 May 2015 22:56:43 +0900
Message-Id: <1430488603-11055-10-git-send-email-akinobu.mita@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1430488603-11055-1-git-send-email-akinobu.mita@gmail.com>
References: <1430488603-11055-1-git-send-email-akinobu.mita@gmail.com>
X-Mailing-List: linux-parisc@vger.kernel.org

This replaces the plain loop over the sglist array with the for_each_sg()
macro, which iterates via sg_next() calls.

Since parisc does not select ARCH_HAS_SG_CHAIN, for_each_sg() is not strictly
required to loop over the sg elements. However, it helps catch drivers that do
not properly initialize their sg tables when CONFIG_DEBUG_SG is enabled.
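For reference, for_each_sg() is defined in include/linux/scatterlist.h roughly
as follows (paraphrased from a kernel of this vintage; not part of this patch):

#define for_each_sg(sglist, sg, nr, __i)                                \
        for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))

Unlike the open-coded sglist++ walk, sg_next() knows how to follow chained
scatterlists and, when CONFIG_DEBUG_SG is enabled, sanity-checks each entry it
visits.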
Signed-off-by: Akinobu Mita
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Cc: linux-arch@vger.kernel.org
---
* New patch from v2

 arch/parisc/kernel/pci-dma.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index ff834fd..b9402c9 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -478,14 +478,16 @@ static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, siz
 static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
 {
         int i;
+        struct scatterlist *sg;
 
         BUG_ON(direction == DMA_NONE);
 
-        for (i = 0; i < nents; i++, sglist++ ) {
-                unsigned long vaddr = (unsigned long)sg_virt(sglist);
-                sg_dma_address(sglist) = (dma_addr_t) virt_to_phys(vaddr);
-                sg_dma_len(sglist) = sglist->length;
-                flush_kernel_dcache_range(vaddr, sglist->length);
+        for_each_sg(sglist, sg, nents, i) {
+                unsigned long vaddr = (unsigned long)sg_virt(sg);
+
+                sg_dma_address(sg) = (dma_addr_t) virt_to_phys(vaddr);
+                sg_dma_len(sg) = sg->length;
+                flush_kernel_dcache_range(vaddr, sg->length);
         }
         return nents;
 }
@@ -493,6 +495,7 @@ static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int n
 static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
 {
         int i;
+        struct scatterlist *sg;
 
         BUG_ON(direction == DMA_NONE);
 
@@ -501,8 +504,8 @@ static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, in
 
         /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
 
-        for (i = 0; i < nents; i++, sglist++ )
-                flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
+        for_each_sg(sglist, sg, nents, i)
+                flush_kernel_vmap_range(sg_virt(sg), sg->length);
 
         return;
 }
@@ -523,21 +526,23 @@ static void pa11_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_h
 static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
 {
         int i;
+        struct scatterlist *sg;
 
         /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
 
-        for (i = 0; i < nents; i++, sglist++ )
-                flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
+        for_each_sg(sglist, sg, nents, i)
+                flush_kernel_vmap_range(sg_virt(sg), sg->length);
 }
 
 static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
 {
         int i;
+        struct scatterlist *sg;
 
         /* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
 
-        for (i = 0; i < nents; i++, sglist++ )
-                flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
+        for_each_sg(sglist, sg, nents, i)
+                flush_kernel_vmap_range(sg_virt(sg), sg->length);
 }
 
 struct hppa_dma_ops pcxl_dma_ops = {
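
To illustrate the CONFIG_DEBUG_SG benefit mentioned in the changelog (a rough
sketch, paraphrased from lib/scatterlist.c of this era; the exact code varies
by kernel version): sg_next() checks the magic value that sg_init_table() /
sg_init_one() write into each entry, so walking an sg table that a driver
never initialized trips a BUG_ON immediately:

struct scatterlist *sg_next(struct scatterlist *sg)
{
#ifdef CONFIG_DEBUG_SG
        /* Catches sg tables that were never run through sg_init_table() */
        BUG_ON(sg->sg_magic != SG_MAGIC);
#endif
        if (sg_is_last(sg))
                return NULL;

        sg++;
        if (unlikely(sg_is_chain(sg)))
                sg = sg_chain_ptr(sg);

        return sg;
}

A plain sglist++ loop never performs this check, so such bugs go unnoticed on
architectures like parisc that do not chain their scatterlists.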