From patchwork Sat Aug 10 16:52:22 2013
X-Patchwork-Submitter: Daniel Mack
X-Patchwork-Id: 2842490
From: Daniel Mack
To: vinod.koul@intel.com, djbw@fb.com
Cc: wangx@marvell.com, eric.y.miao@gmail.com, arnd@arndb.de, nico@fluxnic.net,
 haojian.zhuang@gmail.com, Daniel Mack, cxie4@marvell.com,
 ezequiel.garcia@free-electrons.com, linux@arm.linux.org.uk,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/11] dma: mmp_pdma: add support for byte-aligned transfers
Date: Sat, 10 Aug 2013 18:52:22 +0200
Message-Id: <1376153545-14361-9-git-send-email-zonque@gmail.com>
In-Reply-To: <1376153545-14361-1-git-send-email-zonque@gmail.com>
References: <1376153545-14361-1-git-send-email-zonque@gmail.com>

The PXA DMA controller has a DALGN register which allows for byte-aligned
DMA transfers. Use it if any of the transfer descriptors is not aligned
to a mask of ~0x7.

Signed-off-by: Daniel Mack
---
 drivers/dma/mmp_pdma.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index d7ccc90..579f79a 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -109,6 +109,7 @@ struct mmp_pdma_chan {
 	struct list_head chain_pending;	/* Link descriptors queue for pending */
 	struct list_head chain_running;	/* Link descriptors queue for running */
 	bool idle;			/* channel statue machine */
+	bool byte_align;
 
 	struct dma_pool *desc_pool;	/* Descriptors pool */
 };
@@ -142,7 +143,7 @@ static void set_desc(struct mmp_pdma_phy *phy, dma_addr_t addr)
 
 static void enable_chan(struct mmp_pdma_phy *phy)
 {
-	u32 reg;
+	u32 reg, dalgn;
 
 	if (!phy->vchan)
 		return;
@@ -150,6 +151,13 @@ static void enable_chan(struct mmp_pdma_phy *phy)
 	reg = DRCMR(phy->vchan->drcmr);
 	writel(DRCMR_MAPVLD | phy->idx, phy->base + reg);
 
+	dalgn = readl(phy->base + DALGN);
+	if (phy->vchan->byte_align)
+		dalgn |= 1 << phy->idx;
+	else
+		dalgn &= ~(1 << phy->idx);
+	writel(dalgn, phy->base + DALGN);
+
 	reg = (phy->idx << 2) + DCSR;
 	writel(readl(phy->base + reg) | DCSR_RUN, phy->base + reg);
 
@@ -454,6 +462,7 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 		return NULL;
 
 	chan = to_mmp_pdma_chan(dchan);
+	chan->byte_align = false;
 
 	if (!chan->dir) {
 		chan->dir = DMA_MEM_TO_MEM;
@@ -470,6 +479,8 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 		}
 
 		copy = min_t(size_t, len, PDMA_MAX_DESC_BYTES);
+		if (dma_src & 0x7 || dma_dst & 0x7)
+			chan->byte_align = true;
 
 		new->desc.dcmd = chan->dcmd | (DCMD_LENGTH & copy);
 		new->desc.dsadr = dma_src;
@@ -529,12 +540,16 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 	if ((sgl == NULL) || (sg_len == 0))
 		return NULL;
 
+	chan->byte_align = false;
+
 	for_each_sg(sgl, sg, sg_len, i) {
 		addr = sg_dma_address(sg);
 		avail = sg_dma_len(sgl);
 
 		do {
 			len = min_t(size_t, avail, PDMA_MAX_DESC_BYTES);
+			if (addr & 0x7)
+				chan->byte_align = true;
 
 			/* allocate and populate the descriptor */
 			new = mmp_pdma_alloc_descriptor(chan);
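
For context on the checks above: DALGN holds one enable bit per physical
channel, so setting bit 'idx' turns on byte-aligned transfers for channel
'idx' only, and a descriptor needs that mode whenever one of its bus
addresses has any of its low three bits set (i.e. is not aligned to a mask
of ~0x7). A minimal standalone sketch of those two decisions follows; the
function names are hypothetical illustrations, not part of the driver:

#include <stdbool.h>
#include <stdint.h>

/* A transfer needs the byte-alignment feature whenever an address has
 * any of its low three bits set, i.e. it is not 8-byte aligned. This
 * mirrors the 'addr & 0x7' tests added by the patch. */
static bool needs_byte_align(uint64_t addr)
{
	return (addr & 0x7) != 0;
}

/* DALGN carries one enable bit per physical channel; update only the
 * bit belonging to channel 'idx' and leave the other channels' bits
 * untouched, as enable_chan() does with its read-modify-write. */
static uint32_t dalgn_update(uint32_t dalgn, unsigned int idx, bool byte_align)
{
	if (byte_align)
		dalgn |= 1u << idx;
	else
		dalgn &= ~(1u << idx);
	return dalgn;
}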