From patchwork Wed Aug 5 03:12:05 2015
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 6945711
From: Vinod Koul
To: dmaengine@vger.kernel.org
Cc: Robert Jarzmik, Lars-Peter Clausen, Jun Nie, Vinod Koul
Subject: [PATCH v2 2/3] dmaengine: Add DMA_CTRL_REUSE
Date: Wed, 5 Aug 2015 08:42:05 +0530
Message-Id: <1438744326-8400-2-git-send-email-vinod.koul@intel.com>
In-Reply-To: <1438744326-8400-1-git-send-email-vinod.koul@intel.com>
References: <1438744326-8400-1-git-send-email-vinod.koul@intel.com>
X-Mailing-List: dmaengine@vger.kernel.org

This adds a new descriptor flag, DMA_CTRL_REUSE, which allows a client
to reuse a descriptor by submitting it multiple times, for example for
video buffers.
Add helper APIs for this as well.

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
---
 include/linux/dmaengine.h | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index e2f5eb419976..345629abaee2 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -183,6 +183,8 @@ struct dma_interleaved_template {
  *  operation it continues the calculation with new sources
  * @DMA_PREP_FENCE - tell the driver that subsequent operations depend
  *  on the result of this operation
+ * @DMA_CTRL_REUSE - client can reuse the descriptor and submit it again
+ *  until it is cleared or freed
  */
 enum dma_ctrl_flags {
 	DMA_PREP_INTERRUPT = (1 << 0),
@@ -191,6 +193,7 @@ enum dma_ctrl_flags {
 	DMA_PREP_PQ_DISABLE_Q = (1 << 3),
 	DMA_PREP_CONTINUE = (1 << 4),
 	DMA_PREP_FENCE = (1 << 5),
+	DMA_CTRL_REUSE = (1 << 6),
 };
 
 /**
@@ -400,6 +403,8 @@ enum dma_residue_granularity {
  * @cmd_pause: true, if pause and thereby resume is supported
  * @cmd_terminate: true, if terminate cmd is supported
  * @residue_granularity: granularity of the reported transfer residue
+ * @descriptor_reuse: true, if a descriptor can be reused by the client
+ *  and resubmitted multiple times
  */
 struct dma_slave_caps {
 	u32 src_addr_widths;
@@ -408,6 +413,7 @@ struct dma_slave_caps {
 	bool cmd_pause;
 	bool cmd_terminate;
 	enum dma_residue_granularity residue_granularity;
+	bool descriptor_reuse;
 };
 
 static inline const char *dma_chan_name(struct dma_chan *chan)
@@ -467,6 +473,7 @@ struct dma_async_tx_descriptor {
 	dma_addr_t phys;
 	struct dma_chan *chan;
 	dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
+	int (*desc_free)(struct dma_async_tx_descriptor *tx);
 	dma_async_tx_callback callback;
 	void *callback_param;
 	struct dmaengine_unmap_data *unmap;
@@ -833,6 +840,44 @@ static inline dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc
 	return desc->tx_submit(desc);
 }
 
+int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps);
+
+static inline int dmaengine_desc_set_reuse(struct dma_async_tx_descriptor *tx)
+{
+	struct dma_slave_caps caps;
+	int ret;
+
+	ret = dma_get_slave_caps(tx->chan, &caps);
+	if (ret)
+		return ret;
+
+	if (caps.descriptor_reuse) {
+		tx->flags |= DMA_CTRL_REUSE;
+		return 0;
+	} else {
+		return -EPERM;
+	}
+}
+
+static inline void dmaengine_desc_clear_reuse(struct dma_async_tx_descriptor *tx)
+{
+	tx->flags &= ~DMA_CTRL_REUSE;
+}
+
+static inline bool dmaengine_desc_test_reuse(struct dma_async_tx_descriptor *tx)
+{
+	return (tx->flags & DMA_CTRL_REUSE) == DMA_CTRL_REUSE;
+}
+
+static inline int dmaengine_desc_free(struct dma_async_tx_descriptor *desc)
+{
+	/* freeing via this hook is only supported for reusable descriptors */
+	if (dmaengine_desc_test_reuse(desc))
+		return desc->desc_free(desc);
+	else
+		return -EPERM;
+}
+
 static inline bool dmaengine_check_align(u8 align, size_t off1, size_t off2, size_t len)
 {
 	size_t mask;
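
For illustration only (not part of the patch), here is a minimal sketch of
how a client might use these helpers: prepare one slave descriptor, mark it
reusable, and resubmit the same descriptor from the completion callback.
It assumes "chan" was requested and configured elsewhere, that buf/len
describe an already-mapped DMA buffer, and that the names
start_reusable_xfer() and xfer_done() are invented for the example.

#include <linux/dmaengine.h>

static struct dma_async_tx_descriptor *reusable_tx;

static void xfer_done(void *param)
{
	struct dma_chan *chan = param;

	/* a reusable descriptor may simply be submitted again */
	dmaengine_submit(reusable_tx);
	dma_async_issue_pending(chan);
}

static int start_reusable_xfer(struct dma_chan *chan, dma_addr_t buf,
			       size_t len)
{
	int ret;

	reusable_tx = dmaengine_prep_slave_single(chan, buf, len,
						  DMA_MEM_TO_DEV,
						  DMA_PREP_INTERRUPT);
	if (!reusable_tx)
		return -ENOMEM;

	/* fails with -EPERM if the channel cannot reuse descriptors */
	ret = dmaengine_desc_set_reuse(reusable_tx);
	if (ret)
		return ret;

	reusable_tx->callback = xfer_done;
	reusable_tx->callback_param = chan;

	dmaengine_submit(reusable_tx);
	dma_async_issue_pending(chan);

	return 0;
}

Once the client is done with the transfer, the descriptor can be released
with dmaengine_desc_free(), or the flag can be dropped with
dmaengine_desc_clear_reuse() so the descriptor goes away on the normal
completion path, matching the "until it is cleared or freed" semantics
documented above.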