From patchwork Tue Oct 22 21:08:07 2013
From: Dan Williams
Subject: [PATCH v2 03/13] dmaengine: prepare for generic 'unmap' data
To: dmaengine@vger.kernel.org
Cc: Andy Shevchenko, dave.jiang@intel.com, b.zolnierkie@samsung.com,
    vinod.koul@intel.com, t.figa@samsung.com, Nicolas Ferre,
    linux-kernel@vger.kernel.org, Zhang Wei, kyungmin.park@samsung.com,
    Viresh Kumar, linux@arm.linux.org.uk, Li Yang
Date: Tue, 22 Oct 2013 14:08:07 -0700
Message-ID: <20131022210807.31158.44789.stgit@viggo.jf.intel.com>
In-Reply-To: <1382117733-16720-4-git-send-email-b.zolnierkie@samsung.com>
References: <1382117733-16720-4-git-send-email-b.zolnierkie@samsung.com>
User-Agent: StGit/0.17-1-g7c57

Add a hook for a common dma unmap implementation to enable removal of
the per-driver custom unmap code.  (A reworked version of Bartlomiej
Zolnierkiewicz's patches to remove the custom callbacks and the size
increase of dma_async_tx_descriptor for drivers that don't care about
raid.)
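For illustration, here is roughly where the new hook lands in a
driver's completion path once converted.  This is a hedged sketch, not
code from this patch: the "foo" driver, struct foo_desc, and
foo_unmap_buffers() are hypothetical stand-ins for the per-driver
patterns touched below.

  #include <linux/dmaengine.h>

  /* hypothetical driver descriptor wrapping the generic one */
  struct foo_desc {
  	struct dma_async_tx_descriptor txd;
  };

  static void foo_unmap_buffers(struct foo_desc *desc)
  {
  	/* driver-specific dma_unmap_single()/dma_unmap_page() calls */
  }

  static void foo_dma_complete(struct foo_desc *desc)
  {
  	struct dma_async_tx_descriptor *txd = &desc->txd;

  	dma_cookie_complete(txd);

  	/* new common hook; a no-op until ->unmap is populated */
  	dma_descriptor_unmap(txd);

  	/* legacy custom unmap, removable once all users set ->unmap */
  	if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP))
  		foo_unmap_buffers(desc);

  	if (txd->callback)
  		txd->callback(txd->callback_param);
  }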
Cc: Vinod Koul
Cc: Tomasz Figa
Cc: Dave Jiang
Cc: Nicolas Ferre
Cc: Viresh Kumar
Cc: Andy Shevchenko
Cc: Li Yang
Cc: Zhang Wei
Signed-off-by: Dan Williams
[bzolnier: prepare pl330 driver for adding missing unmap while at it]
Signed-off-by: Bartlomiej Zolnierkiewicz
Signed-off-by: Kyungmin Park
---
Resend to:
1/ add it to the new dmaengine patchwork
2/ cc maintainers of affected drivers
3/ fixup some mail addresses

 drivers/dma/amba-pl08x.c  |    1 +
 drivers/dma/at_hdmac.c    |    1 +
 drivers/dma/dw/core.c     |    1 +
 drivers/dma/ep93xx_dma.c  |    1 +
 drivers/dma/fsldma.c      |    1 +
 drivers/dma/ioat/dma.c    |    1 +
 drivers/dma/ioat/dma_v2.c |    1 +
 drivers/dma/ioat/dma_v3.c |    1 +
 drivers/dma/iop-adma.c    |    1 +
 drivers/dma/mv_xor.c      |    1 +
 drivers/dma/pl330.c       |    2 ++
 drivers/dma/ppc4xx/adma.c |    1 +
 drivers/dma/timb_dma.c    |    1 +
 drivers/dma/txx9dmac.c    |    1 +
 include/linux/dmaengine.h |   26 ++++++++++++++++++++++++++
 15 files changed, 41 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/amba-pl08x.c b/drivers/dma/amba-pl08x.c
index fce46c5bf1c7..7f9846464b77 100644
--- a/drivers/dma/amba-pl08x.c
+++ b/drivers/dma/amba-pl08x.c
@@ -1197,6 +1197,7 @@ static void pl08x_desc_free(struct virt_dma_desc *vd)
 	struct pl08x_txd *txd = to_pl08x_txd(&vd->tx);
 	struct pl08x_dma_chan *plchan = to_pl08x_chan(vd->tx.chan);
 
+	dma_descriptor_unmap(txd);
 	if (!plchan->slave)
 		pl08x_unmap_buffers(txd);
diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index c787f38a186a..cc7098ddf9d4 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -345,6 +345,7 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
 	list_move(&desc->desc_node, &atchan->free_list);
 
 	/* unmap dma addresses (not on slave channels) */
+	dma_descriptor_unmap(txd);
 	if (!atchan->chan_common.private) {
 		struct device *parent = chan2parent(&atchan->chan_common);
 		if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 89eb89f22284..e3fe1b1a73b1 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -311,6 +311,7 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc,
 	list_splice_init(&desc->tx_list, &dwc->free_list);
 	list_move(&desc->desc_node, &dwc->free_list);
 
+	dma_descriptor_unmap(txd);
 	if (!is_slave_direction(dwc->direction)) {
 		struct device *parent = chan2parent(&dwc->chan);
 		if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
diff --git a/drivers/dma/ep93xx_dma.c b/drivers/dma/ep93xx_dma.c
index 591cd8c63abb..dcd6bf5d3091 100644
--- a/drivers/dma/ep93xx_dma.c
+++ b/drivers/dma/ep93xx_dma.c
@@ -791,6 +791,7 @@ static void ep93xx_dma_tasklet(unsigned long data)
 		 * For the memcpy channels the API requires us to unmap the
 		 * buffers unless requested otherwise.
 		 */
+		dma_descriptor_unmap(&desc->txd);
 		if (!edmac->chan.private)
 			ep93xx_dma_unmap_buffers(desc);
diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
index b3f3e90054f2..66c4052a1f34 100644
--- a/drivers/dma/fsldma.c
+++ b/drivers/dma/fsldma.c
@@ -868,6 +868,7 @@ static void fsldma_cleanup_descriptor(struct fsldma_chan *chan,
 	/* Run any dependencies */
 	dma_run_dependencies(txd);
 
+	dma_descriptor_unmap(txd);
 	/* Unmap the dst buffer, if requested */
 	if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
 		if (txd->flags & DMA_COMPL_DEST_UNMAP_SINGLE)
diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
index 5ff6fc1819dc..26f8cfd6bc3f 100644
--- a/drivers/dma/ioat/dma.c
+++ b/drivers/dma/ioat/dma.c
@@ -602,6 +602,7 @@ static void __cleanup(struct ioat_dma_chan *ioat, dma_addr_t phys_complete)
 		dump_desc_dbg(ioat, desc);
 		if (tx->cookie) {
 			dma_cookie_complete(tx);
+			dma_descriptor_unmap(tx);
 			ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
 			ioat->active -= desc->hw->tx_cnt;
 			if (tx->callback) {
diff --git a/drivers/dma/ioat/dma_v2.c b/drivers/dma/ioat/dma_v2.c
index b925e1b1d139..fc7b50a813cc 100644
--- a/drivers/dma/ioat/dma_v2.c
+++ b/drivers/dma/ioat/dma_v2.c
@@ -148,6 +148,7 @@ static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete)
 		tx = &desc->txd;
 		dump_desc_dbg(ioat, desc);
 		if (tx->cookie) {
+			dma_descriptor_unmap(tx);
 			ioat_dma_unmap(chan, tx->flags, desc->len, desc->hw);
 			dma_cookie_complete(tx);
 			if (tx->callback) {
diff --git a/drivers/dma/ioat/dma_v3.c b/drivers/dma/ioat/dma_v3.c
index d8ececaf1b57..57a2901b917a 100644
--- a/drivers/dma/ioat/dma_v3.c
+++ b/drivers/dma/ioat/dma_v3.c
@@ -577,6 +577,7 @@ static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete)
 		tx = &desc->txd;
 		if (tx->cookie) {
 			dma_cookie_complete(tx);
+			dma_descriptor_unmap(tx);
 			ioat3_dma_unmap(ioat, desc, idx + i);
 			if (tx->callback) {
 				tx->callback(tx->callback_param);
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index dd8b44a56e5d..8f6e426590eb 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -152,6 +152,7 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
 		if (tx->callback)
 			tx->callback(tx->callback_param);
 
+		dma_descriptor_unmap(tx);
 		/* unmap dma addresses
 		 * (unmap_single vs unmap_page?)
 		 */
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 536dcb8ba5fd..ed1ab1d0875e 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -278,6 +278,7 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
 			desc->async_tx.callback(
 				desc->async_tx.callback_param);
 
+		dma_descriptor_unmap(&desc->async_tx);
 		/* unmap dma addresses
 		 * (unmap_single vs unmap_page?)
 		 */
diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index a562d24d20bf..ab25e52cd43b 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -2268,6 +2268,8 @@ static void pl330_tasklet(unsigned long data)
 			list_move_tail(&desc->node, &pch->dmac->desc_pool);
 		}
 
+		dma_descriptor_unmap(&desc->txd);
+
 		if (callback) {
 			spin_unlock_irqrestore(&pch->lock, flags);
 			callback(callback_param);
diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
index 370ff8265630..442492da7415 100644
--- a/drivers/dma/ppc4xx/adma.c
+++ b/drivers/dma/ppc4xx/adma.c
@@ -1765,6 +1765,7 @@ static dma_cookie_t ppc440spe_adma_run_tx_complete_actions(
 			desc->async_tx.callback(
 				desc->async_tx.callback_param);
 
+		dma_descriptor_unmap(&desc->async_tx);
 		/* unmap dma addresses
 		 * (unmap_single vs unmap_page?)
 		 *
diff --git a/drivers/dma/timb_dma.c b/drivers/dma/timb_dma.c
index 28af214fce04..1d0c98839087 100644
--- a/drivers/dma/timb_dma.c
+++ b/drivers/dma/timb_dma.c
@@ -293,6 +293,7 @@ static void __td_finish(struct timb_dma_chan *td_chan)
 	list_move(&td_desc->desc_node, &td_chan->free_list);
 
+	dma_descriptor_unmap(txd);
 	if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP))
 		__td_unmap_descs(td_desc,
 			txd->flags & DMA_COMPL_SRC_UNMAP_SINGLE);
diff --git a/drivers/dma/txx9dmac.c b/drivers/dma/txx9dmac.c
index 71e8e775189e..22a0b6c78c77 100644
--- a/drivers/dma/txx9dmac.c
+++ b/drivers/dma/txx9dmac.c
@@ -419,6 +419,7 @@ txx9dmac_descriptor_complete(struct txx9dmac_chan *dc,
 	list_splice_init(&desc->tx_list, &dc->free_list);
 	list_move(&desc->desc_node, &dc->free_list);
 
+	dma_descriptor_unmap(txd);
 	if (!ds) {
 		dma_addr_t dmaaddr;
 		if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 0bc727534108..9070050fbcd8 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -413,6 +413,17 @@ void dma_chan_cleanup(struct kref *kref);
 typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
 
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
+
+struct dmaengine_unmap_data {
+	u8 to_cnt;
+	u8 from_cnt;
+	u8 bidi_cnt;
+	struct device *dev;
+	struct kref kref;
+	size_t len;
+	dma_addr_t addr[0];
+};
+
 /**
  * struct dma_async_tx_descriptor - async transaction descriptor
  * ---dma generic offload fields---
@@ -438,6 +449,7 @@ struct dma_async_tx_descriptor {
 	dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
 	dma_async_tx_callback callback;
 	void *callback_param;
+	struct dmaengine_unmap_data *unmap;
 #ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
 	struct dma_async_tx_descriptor *next;
 	struct dma_async_tx_descriptor *parent;
@@ -445,6 +457,20 @@ struct dma_async_tx_descriptor {
 #endif
 };
 
+static inline void dma_set_unmap(struct dma_async_tx_descriptor *tx,
+				 struct dmaengine_unmap_data *unmap)
+{
+	kref_get(&unmap->kref);
+	tx->unmap = unmap;
+}
+
+static inline void dma_descriptor_unmap(struct dma_async_tx_descriptor *tx)
+{
+	if (tx->unmap) {
+		tx->unmap = NULL;
+	}
+}
+
 #ifndef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH
 static inline void txd_lock(struct dma_async_tx_descriptor *txd)
 {
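
(Reviewer note, not part of the patch: a hedged sketch of how a client
might fill in struct dmaengine_unmap_data and hand it to a descriptor
with dma_set_unmap().  In this patch dma_descriptor_unmap() only drops
the pointer; the actual unmapping walk and allocation/release helpers
are assumed to come later in the series.  "dev", "src_page",
"dst_page", "len", and "tx" below are placeholders.)

  struct dmaengine_unmap_data *unmap;

  /* room for one source and one destination address */
  unmap = kzalloc(sizeof(*unmap) + 2 * sizeof(dma_addr_t), GFP_NOWAIT);
  if (unmap) {
  	kref_init(&unmap->kref);
  	unmap->dev = dev;
  	unmap->len = len;
  	unmap->to_cnt = 1;	/* addr[0] is DMA_TO_DEVICE */
  	unmap->from_cnt = 1;	/* addr[1] is DMA_FROM_DEVICE */
  	unmap->addr[0] = dma_map_page(dev, src_page, 0, len, DMA_TO_DEVICE);
  	unmap->addr[1] = dma_map_page(dev, dst_page, 0, len, DMA_FROM_DEVICE);

  	dma_set_unmap(tx, unmap);	/* descriptor takes its own kref */
  	/* the allocation reference must be dropped after submit; the
  	 * matching release helper is not part of this patch */
  }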