From patchwork Tue May 12 15:37:40 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 6389341
From: Maxime Ripard
To: Vinod Koul, Dan Williams, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
Cc: dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
	Lior Amsalem, Thomas Petazzoni, Herbert Xu, "David S. Miller",
	Maxime Ripard
Subject: [PATCH 5/8] dmaengine: mv_xor: bug fix for race condition in
	descriptors cleanup
Date: Tue, 12 May 2015 17:37:40 +0200
Message-Id: <1431445063-20226-6-git-send-email-maxime.ripard@free-electrons.com>
X-Mailer: git-send-email 2.4.0
In-Reply-To: <1431445063-20226-1-git-send-email-maxime.ripard@free-electrons.com>
References: <1431445063-20226-1-git-send-email-maxime.ripard@free-electrons.com>
X-Mailing-List: dmaengine@vger.kernel.org

From: Lior Amsalem

This patch fixes a bug in the XOR driver where the cleanup function could be
called and free descriptors that had never been processed by the engine,
resulting in data errors.

The cleanup function now frees descriptors based on the ownership bit in the
descriptors.
Signed-off-by: Lior Amsalem
Reviewed-by: Ofer Heifetz
Signed-off-by: Maxime Ripard
---
 drivers/dma/mv_xor.c | 72 +++++++++++++++++++++++++++++++++-------------------
 drivers/dma/mv_xor.h |  1 +
 2 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index d1110442f0d2..28980483eafb 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -293,7 +293,8 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
 	dma_cookie_t cookie = 0;
 	int busy = mv_chan_is_busy(mv_chan);
 	u32 current_desc = mv_chan_get_current_desc(mv_chan);
-	int seen_current = 0;
+	int current_cleaned = 0;
+	struct mv_xor_desc *hw_desc;
 
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
 	dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc);
@@ -305,38 +306,57 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
 
 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
 				 node) {
-		prefetch(_iter);
-		prefetch(&_iter->async_tx);
 
-		/* do not advance past the current descriptor loaded into the
-		 * hardware channel, subsequent descriptors are either in
-		 * process or have not been submitted
-		 */
-		if (seen_current)
-			break;
+		/* clean finished descriptors */
+		hw_desc = iter->hw_desc;
+		if (hw_desc->status & XOR_DESC_SUCCESS) {
+			cookie = mv_desc_run_tx_complete_actions(iter, mv_chan,
+								 cookie);
 
-		/* stop the search if we reach the current descriptor and the
-		 * channel is busy
-		 */
-		if (iter->async_tx.phys == current_desc) {
-			seen_current = 1;
-			if (busy)
+			/* done processing desc, clean slot */
+			mv_desc_clean_slot(iter, mv_chan);
+
+			/* break if we cleaned the current descriptor */
+			if (iter->async_tx.phys == current_desc) {
+				current_cleaned = 1;
 				break;
+			}
+		} else {
+			if (iter->async_tx.phys == current_desc) {
+				current_cleaned = 0;
+				break;
+			}
 		}
-
-		cookie = mv_desc_run_tx_complete_actions(iter, mv_chan, cookie);
-
-		if (mv_desc_clean_slot(iter, mv_chan))
-			break;
 	}
 
 	if ((busy == 0) && !list_empty(&mv_chan->chain)) {
-		struct mv_xor_desc_slot *chain_head;
-		chain_head = list_entry(mv_chan->chain.next,
-					struct mv_xor_desc_slot,
-					node);
-
-		mv_chan_start_new_chain(mv_chan, chain_head);
+		if (current_cleaned) {
+			/*
+			 * current descriptor cleaned and removed, run
+			 * from list head
+			 */
+			iter = list_entry(mv_chan->chain.next,
+					  struct mv_xor_desc_slot,
+					  node);
+			mv_chan_start_new_chain(mv_chan, iter);
+		} else {
+			if (!list_is_last(&iter->node, &mv_chan->chain)) {
+				/*
+				 * descriptors are still waiting after
+				 * current, trigger them
+				 */
+				iter = list_entry(iter->node.next,
+						  struct mv_xor_desc_slot,
+						  node);
+				mv_chan_start_new_chain(mv_chan, iter);
+			} else {
+				/*
+				 * some descriptors are still waiting
+				 * to be cleaned
+				 */
+				tasklet_schedule(&mv_chan->irq_tasklet);
+			}
+		}
 	}
 
 	if (cookie > 0)
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index 71684de37ddb..b7455b42137b 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -32,6 +32,7 @@
 #define XOR_OPERATION_MODE_MEMCPY	2
 #define XOR_OPERATION_MODE_IN_DESC	7
 #define XOR_DESCRIPTOR_SWAP		BIT(14)
+#define XOR_DESC_SUCCESS		0x40000000
 
 #define XOR_DESC_OPERATION_XOR		(0 << 24)
 #define XOR_DESC_OPERATION_CRC32C	(1 << 24)
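
For readers not familiar with the driver, here is a minimal user-space sketch
of the cleanup policy this patch introduces: the chain is walked in order,
only descriptors whose hardware status word carries the success bit are
completed and freed, and the walk stops at the descriptor the engine is
currently pointing at. The struct desc, DESC_SUCCESS and cleanup_chain()
names below are illustrative stand-ins (not the driver's real types); the
real code operates on mv_xor_desc_slot entries under the channel lock and
restarts the chain (or reschedules the tasklet) depending on whether the
current descriptor was itself cleaned.

#include <stdbool.h>
#include <stdio.h>

#define DESC_SUCCESS 0x40000000u   /* mirrors XOR_DESC_SUCCESS in the patch */

struct desc {
	unsigned int status;       /* written back by the (simulated) engine */
	unsigned int phys;         /* pretend DMA address of this descriptor */
};

/*
 * Walk the chain and complete finished descriptors only.  Returns true when
 * the descriptor currently loaded in the engine was itself cleaned, i.e. the
 * whole chain up to and including current_phys completed successfully.
 */
static bool cleanup_chain(struct desc *chain, int n, unsigned int current_phys)
{
	for (int i = 0; i < n; i++) {
		struct desc *d = &chain[i];

		if (!(d->status & DESC_SUCCESS)) {
			/* engine has not finished this one: never free it */
			printf("desc %#x not done, stop\n", d->phys);
			return false;
		}

		printf("desc %#x completed, freeing\n", d->phys);
		if (d->phys == current_phys)
			return true;   /* cleaned up to the current descriptor */
	}
	return true;
}

int main(void)
{
	struct desc chain[] = {
		{ DESC_SUCCESS, 0x1000 },
		{ DESC_SUCCESS, 0x2000 },
		{ 0,            0x3000 },  /* still owned by the engine */
	};
	unsigned int current_phys = 0x3000;

	if (cleanup_chain(chain, 3, current_phys))
		printf("restart from the head of the remaining chain\n");
	else
		printf("descriptors still pending, re-check later (tasklet)\n");
	return 0;
}

The key point, as in the patch, is that freeing is gated on the status bit
written back by the hardware rather than on the position of the current
descriptor alone, so a descriptor the engine never processed can no longer
be handed back to the completion path.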