From patchwork Tue Jun 21 08:01:31 2016
X-Patchwork-Submitter: Jarkko Nikula
X-Patchwork-Id: 9189747
From: Jarkko Nikula <jarkko.nikula@linux.intel.com>
To: dmaengine@vger.kernel.org
Cc: Viresh Kumar, Andy Shevchenko, Vinod Koul, Jarkko Nikula
Subject: [PATCH] dmaengine: dw: Fix data corruption in large device to memory transfers
Date: Tue, 21 Jun 2016 11:01:31 +0300
Message-Id: <1466496091-7934-1-git-send-email-jarkko.nikula@linux.intel.com>
X-Mailer: git-send-email 2.8.1

When transferring more data than the maximum block size supported by the
HW multiplied by the source width, the transfer is split into smaller
chunks. Currently the code calculates the memory width, and thus the
alignment, before splitting for both memory-to-device and device-to-memory
transfers.

For memory-to-device transfers this works fine, since the alignment is
preserved through the splitting and the split blocks remain memory width
aligned. However, in device-to-memory transfers the alignment breaks when
the maximum block size multiplied by the register width doesn't have the
same alignment as the buffer. This happens, for instance, when
transferring 4100 bytes (32-bit aligned) from an 8-bit register on a DW
DMA controller whose maximum block size is 4095 elements. An attempt to do
such a transfer caused data corruption.

Fix this by calculating and setting the destination memory width after
splitting, using the split block's alignment and length.

Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
---
I'm not sure whether this is stable material or not. I lean towards not
stable, so I didn't Cc it.

I noticed the issue by tweaking spidev to allow buffers bigger than 4 KiB.
There was RX corruption when doing 8-bit transfers with even-sized buffers
of >= 4098 bytes on HW where the max block size is 4095.
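Not part of the patch, just for illustration: a minimal userspace sketch of
the width calculation with the values from the report above. The buffer
address and data_width below are assumptions picked for the example, and
lowest_bit() only stands in for the kernel's __ffs().

/*
 * Illustrative userspace sketch (not part of the patch): reproduces the
 * destination width calculation with the values from the report
 * (8-bit register, 4100-byte buffer, max block size 4095 elements).
 * The buffer address and data_width are assumptions for the example.
 */
#include <stdio.h>

/* Stand-in for the kernel's __ffs(): index of the lowest set bit. */
static unsigned int lowest_bit(unsigned long x)
{
	return (unsigned int)__builtin_ctzl(x);
}

int main(void)
{
	unsigned long mem = 0x10000;		/* 32-bit aligned RX buffer */
	unsigned long len = 4100;		/* total length, 32-bit aligned */
	unsigned long data_width = 4;		/* max memory data width, bytes */
	unsigned int reg_width = 0;		/* 8-bit device register */
	unsigned long block_size = 4095;	/* max elements per block */
	unsigned long dlen = block_size << reg_width; /* first chunk: 4095 bytes */

	/* Before the fix: width derived from the whole buffer -> 4 bytes. */
	printf("width from len:  %lu bytes\n",
	       1UL << lowest_bit(data_width | mem | len));
	/* After the fix: width derived from the split chunk -> 1 byte. */
	printf("width from dlen: %lu bytes\n",
	       1UL << lowest_bit(data_width | mem | dlen));

	return 0;
}

The point is that the first split chunk is 4095 bytes, so the 32-bit
destination width derived from the full 4100-byte buffer no longer matches
it; computed per chunk, the width drops to 8-bit.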
---
 drivers/dma/dw/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index edf053f73a49..878e2bf58233 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -831,8 +831,6 @@ slave_sg_todev_fill_desc:
 			mem = sg_dma_address(sg);
 			len = sg_dma_len(sg);
 
-			mem_width = __ffs(data_width | mem | len);
-
 slave_sg_fromdev_fill_desc:
 			desc = dwc_desc_get(dwc);
 			if (!desc)
@@ -840,15 +838,17 @@ slave_sg_fromdev_fill_desc:
 
 			lli_write(desc, sar, reg);
 			lli_write(desc, dar, mem);
-			lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			if ((len >> reg_width) > dwc->block_size) {
 				dlen = dwc->block_size << reg_width;
-				mem += dlen;
 				len -= dlen;
 			} else {
 				dlen = len;
 				len = 0;
 			}
+			mem_width = __ffs(data_width | mem | dlen);
+			mem += dlen;
+			lli_write(desc, ctllo,
+				  ctllo | DWC_CTLL_DST_WIDTH(mem_width));
 			lli_write(desc, ctlhi, dlen >> reg_width);
 			desc->len = dlen;