From patchwork Mon Feb 4 19:47:02 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matt Porter
X-Patchwork-Id: 2094751
From: Matt Porter
To: Vinod Koul
Cc: Dan Williams, Chris Ball, Grant Likely,
	Linux DaVinci Kernel List, Linux Kernel Mailing List,
	Linux MMC List
Subject: [PATCH v3 1/3] dmaengine: add dma_get_slave_sg_caps()
Date: Mon, 4 Feb 2013 14:47:02 -0500
Message-Id: <1360007224-21735-2-git-send-email-mporter@ti.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1360007224-21735-1-git-send-email-mporter@ti.com>
References: <1360007224-21735-1-git-send-email-mporter@ti.com>
X-Mailing-List: linux-mmc@vger.kernel.org

Add a dmaengine API to retrieve slave SG transfer capabilities. The API
is optionally implemented by dmaengine drivers; when it is not
implemented, the call returns a NULL pointer. A client driver using this
API provides the required DMA channel, address width, and burst size of
the transfer. dma_get_slave_sg_caps() returns an SG caps structure with
the maximum number and size of SG segments that the given channel can
handle.

Signed-off-by: Matt Porter
---
 include/linux/dmaengine.h | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d3201e4..5b5b220 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -371,6 +371,19 @@ struct dma_slave_config {
 	unsigned int slave_id;
 };
 
+/* struct dma_slave_sg_caps - expose SG transfer capability of a
+ * channel.
+ *
+ * @max_seg_nr: maximum number of SG segments supported on a SG/SLAVE
+ * channel (0 for no maximum or not a SG/SLAVE channel)
+ * @max_seg_len: maximum length of SG segments supported on a SG/SLAVE
+ * channel (0 for no maximum or not a SG/SLAVE channel)
+ */
+struct dma_slave_sg_caps {
+	u32 max_seg_nr;
+	u32 max_seg_len;
+};
+
 static inline const char *dma_chan_name(struct dma_chan *chan)
 {
 	return dev_name(&chan->dev->device);
@@ -534,6 +547,7 @@ struct dma_tx_state {
  * struct with auxiliary transfer status information, otherwise the call
  * will just return a simple status code
  * @device_issue_pending: push pending transactions to hardware
+ * @device_slave_sg_caps: return the slave SG capabilities
  */
 struct dma_device {
 
@@ -602,6 +616,9 @@ struct dma_device {
 					dma_cookie_t cookie,
 					struct dma_tx_state *txstate);
 	void (*device_issue_pending)(struct dma_chan *chan);
+	struct dma_slave_sg_caps *(*device_slave_sg_caps)(
+		struct dma_chan *chan, enum dma_slave_buswidth addr_width,
+		u32 maxburst);
 };
 
 static inline int dmaengine_device_control(struct dma_chan *chan,
@@ -969,6 +986,29 @@ dma_set_tx_state(struct dma_tx_state *st, dma_cookie_t last, dma_cookie_t used,
 	}
 }
 
+/**
+ * dma_get_slave_sg_caps - get DMAC SG transfer capabilities
+ * @chan: target DMA channel
+ * @addr_width: address width of the DMA transfer
+ * @maxburst: maximum DMA transfer burst size
+ *
+ * Get SG transfer capabilities for a specified channel. If the dmaengine
+ * driver does not implement SG transfer capabilities then NULL is
+ * returned.
+ */
+static inline struct dma_slave_sg_caps
+*dma_get_slave_sg_caps(struct dma_chan *chan,
+		       enum dma_slave_buswidth addr_width,
+		       u32 maxburst)
+{
+	if (chan->device->device_slave_sg_caps)
+		return chan->device->device_slave_sg_caps(chan,
+							   addr_width,
+							   maxburst);
+
+	return NULL;
+}
+
 enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie);
 #ifdef CONFIG_DMA_ENGINE
 enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
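
For reference, a minimal sketch of how a slave client might consume this
API. It is not part of this patch or series: foo_mmc_set_dma_limits(),
the 4-byte address width, and the burst size of 16 are illustrative
assumptions; max_segs and max_seg_size are the existing struct mmc_host
request-geometry fields.

#include <linux/dmaengine.h>
#include <linux/mmc/host.h>

/*
 * Hypothetical client-side sketch: query the channel's SG limits and
 * clamp the MMC host's request geometry to match. Falls back to the
 * driver's default limits when the dmaengine driver does not implement
 * device_slave_sg_caps().
 */
static void foo_mmc_set_dma_limits(struct mmc_host *mmc, struct dma_chan *chan)
{
	struct dma_slave_sg_caps *caps;

	caps = dma_get_slave_sg_caps(chan, DMA_SLAVE_BUSWIDTH_4_BYTES, 16);
	if (!caps)
		return;		/* no SG caps reported; keep defaults */

	if (caps->max_seg_nr)		/* 0 means no maximum */
		mmc->max_segs = caps->max_seg_nr;
	if (caps->max_seg_len)
		mmc->max_seg_size = caps->max_seg_len;
}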
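
And a provider-side sketch of the new struct dma_device callback. This is
not taken from the series either; all "foo_" names and the 65535-byte
limit are placeholders chosen only to show the hook's shape.

#include <linux/dmaengine.h>

/*
 * Hypothetical dmaengine driver that advertises a fixed segment-length
 * limit and no limit on the number of segments.
 */
static struct dma_slave_sg_caps foo_dma_caps = {
	.max_seg_nr  = 0,	/* 0 = no maximum segment count */
	.max_seg_len = 65535,	/* example hardware transfer-count limit */
};

static struct dma_slave_sg_caps *foo_dma_device_slave_sg_caps(
		struct dma_chan *chan, enum dma_slave_buswidth addr_width,
		u32 maxburst)
{
	/* A real driver could vary the limits with addr_width/maxburst. */
	return &foo_dma_caps;
}

/* At probe time: ddev->device_slave_sg_caps = foo_dma_device_slave_sg_caps; */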