From patchwork Mon Jan  2 10:00:42 2017
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 9493307
From: Andy Shevchenko
To: dmaengine@vger.kernel.org, Vinod Koul, Eugeniy Paltsev
Cc: Jarkko Nikula, Andy Shevchenko
Subject: [PATCH v2 1/8] dmaengine: dw: Fix data corruption in large device to memory transfers
Date: Mon,  2 Jan 2017 12:00:42 +0200
Message-Id: <20170102100049.127155-2-andriy.shevchenko@linux.intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170102100049.127155-1-andriy.shevchenko@linux.intel.com>
References: <20170102100049.127155-1-andriy.shevchenko@linux.intel.com>

From: Jarkko Nikula

When transferring more data than the maximum block size supported by the
HW multiplied by the source width, the transfer is split into smaller
chunks.

Currently the code calculates the memory width, and thus the alignment,
before splitting, for both memory to device and device to memory
transfers. For memory to device transfers this works fine, since the
alignment is preserved through the splitting and the split blocks remain
memory width aligned. However, in device to memory transfers the
alignment breaks when the maximum block size multiplied by the register
width doesn't have the same alignment as the buffer. This happens, for
instance, when transferring 4100 bytes (32-bit aligned) from an 8-bit
register on a DW DMA that has a maximum block size of 4095 elements. An
attempt to do such a transfer caused data corruption.

Fix this by calculating and setting the destination memory width after
splitting, using the alignment and length of the split block.
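To make the arithmetic concrete, here is a sketch of the width
calculation for the example above (illustration only, not part of the
patch: 0x1000 stands in for an arbitrary 32-bit aligned buffer address,
4 stands in for the controller's data_width in bytes, and __ffs()
returns the index of the least significant set bit):

	/* Old code: one width for the whole 4100-byte buffer */
	mem_width = __ffs(4 | 0x1000 | 4100);	/* == 2, i.e. 32-bit */

	/* New code: one width per split block */
	mem_width = __ffs(4 | 0x1000 | 4095);	/* == 0, i.e. 8-bit */
	mem_width = __ffs(4 | 0x1FFF | 5);	/* == 0, i.e. 8-bit */

With the old calculation both split blocks are programmed 32-bit wide,
although the first one is 4095 bytes long (not a multiple of 4) and the
second one starts at an unaligned address, hence the corruption. The
new calculation picks the safe 8-bit width for both blocks.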
Signed-off-by: Jarkko Nikula
Signed-off-by: Andy Shevchenko
---
 drivers/dma/dw/core.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index e5adf5d1c34f..45bb608a1b7c 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -789,17 +789,13 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 
 			lli_write(desc, sar, mem);
 			lli_write(desc, dar, reg);
-			lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width));
 			if ((len >> mem_width) > dwc->block_size) {
 				dlen = dwc->block_size << mem_width;
-				mem += dlen;
-				len -= dlen;
 			} else {
 				dlen = len;
-				len = 0;
 			}
-
 			lli_write(desc, ctlhi, dlen >> mem_width);
+			lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width));
 			desc->len = dlen;
 
 			if (!first) {
@@ -809,6 +805,9 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				list_add_tail(&desc->desc_node, &first->tx_list);
 			}
 			prev = desc;
+
+			mem += dlen;
+			len -= dlen;
 			total_len += dlen;
 
 			if (len)
@@ -833,8 +832,6 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 			mem = sg_dma_address(sg);
 			len = sg_dma_len(sg);
 
-			mem_width = __ffs(data_width | mem | len);
-
 slave_sg_fromdev_fill_desc:
 			desc = dwc_desc_get(dwc);
 			if (!desc)
@@ -842,16 +839,14 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 
 			lli_write(desc, sar, reg);
 			lli_write(desc, dar, mem);
-			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			if ((len >> reg_width) > dwc->block_size) {
 				dlen = dwc->block_size << reg_width;
-				mem += dlen;
-				len -= dlen;
 			} else {
 				dlen = len;
-				len = 0;
 			}
 			lli_write(desc, ctlhi, dlen >> reg_width);
+			mem_width = __ffs(data_width | mem | dlen);
+			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			desc->len = dlen;
 
 			if (!first) {
@@ -861,6 +856,9 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				list_add_tail(&desc->desc_node, &first->tx_list);
 			}
 			prev = desc;
+
+			mem += dlen;
+			len -= dlen;
 			total_len += dlen;
 
 			if (len)
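
For reference, a stand-alone user-space sketch of the device to memory
splitting loop (hypothetical, not part of the patch): it replays the
4100-byte example with block_size = 4095 and an 8-bit register, and
prints the destination width that the old code (computed once per
scatterlist entry) and the new code (computed per split block) would
program. The kernel's __ffs() is modelled with the GCC builtin
__builtin_ctz(), and the buffer address 0x1000 is an assumed example.

	#include <stdio.h>
	#include <stdint.h>

	/* Stand-in for the kernel's __ffs(): index of the LSB set. */
	static unsigned int sim_ffs(uint32_t v)
	{
		return (unsigned int)__builtin_ctz(v);
	}

	int main(void)
	{
		const uint32_t data_width = 4;		/* port handles up to 32-bit */
		const uint32_t block_size = 4095;	/* max elements per block */
		const uint32_t reg_width = 0;		/* 8-bit register: log2(1 byte) */
		uint32_t mem = 0x1000;			/* 32-bit aligned buffer */
		uint32_t len = 4100;			/* 32-bit aligned length */

		/* Old code: one destination width for the whole entry. */
		unsigned int old_width = sim_ffs(data_width | mem | len);

		while (len) {
			uint32_t dlen = ((len >> reg_width) > block_size) ?
					(block_size << reg_width) : len;

			/* New code: recompute from the split block itself. */
			unsigned int new_width = sim_ffs(data_width | mem | dlen);

			printf("block @0x%x, %u bytes: old %u-bit, new %u-bit\n",
			       mem, dlen, 8u << old_width, 8u << new_width);

			mem += dlen;
			len -= dlen;
		}

		return 0;
	}

It prints old 32-bit versus new 8-bit for both blocks: the first block
is 4095 bytes at 0x1000, the second is 5 bytes at the unaligned address
0x1FFF.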