From patchwork Thu Sep 15 17:22:45 2016
X-Patchwork-Submitter: Sinan Kaya
X-Patchwork-Id: 9334383
From: Sinan Kaya
To: dmaengine@vger.kernel.org, timur@codeaurora.org, devicetree@vger.kernel.org,
 cov@codeaurora.org, vinod.koul@intel.com, jcm@redhat.com
Cc: agross@codeaurora.org, arnd@arndb.de, linux-arm-msm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, Sinan Kaya, Dan Williams,
 linux-kernel@vger.kernel.org
Subject: [PATCH V3 09/10] dmaengine: qcom_hidma: protect common data structures
Date: Thu, 15 Sep 2016 13:22:45 -0400
Message-Id: <1473960166-30155-10-git-send-email-okaya@codeaurora.org>
In-Reply-To: <1473960166-30155-1-git-send-email-okaya@codeaurora.org>
References: <1473960166-30155-1-git-send-email-okaya@codeaurora.org>
List-ID: X-Mailing-List: dmaengine@vger.kernel.org

When MSI interrupts are supported, the error interrupt and the transfer
interrupts can come from multiple processor contexts, since each error
interrupt is delivered as its own MSI interrupt. If the channel is disabled
by the first error interrupt, the remaining error interrupts return
gracefully from the interrupt handler.

If an error is observed while servicing completions on the success path,
posting of the completions is aborted as soon as the channel-disabled state
is observed. The error interrupt handler takes over from there and finishes
the remaining completions. This prevents multiple success and error
messages from being delivered to the client in mixed order.

Also got rid of the hidma_post_completed function and moved the locks
inside the hidma_ll_int_handler_internal path. Rearranged the assignments
so that variables are updated only while the lock is held.
Signed-off-by: Sinan Kaya
---
 drivers/dma/qcom/hidma_ll.c | 142 ++++++++++++++++++--------------------------
 1 file changed, 58 insertions(+), 84 deletions(-)

diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
index f0630e0..386a64c 100644
--- a/drivers/dma/qcom/hidma_ll.c
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -198,18 +198,50 @@ static void hidma_ll_tre_complete(unsigned long arg)
 	}
 }
 
-static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
-				u8 err_info, u8 err_code)
+/*
+ * Called to handle the interrupt for the channel.
+ * Return a positive number if TRE or EVRE were consumed on this run.
+ * Return a positive number if there are pending TREs or EVREs.
+ * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev, u8 err_info,
+				       u8 err_code)
 {
+	u32 *current_evre;
 	struct hidma_tre *tre;
 	unsigned long flags;
+	u32 evre_write_off;
+	u32 cfg;
+	u32 offset;
+
+	evre_write_off = readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
+	if ((evre_write_off > lldev->evre_ring_size) ||
+	    (evre_write_off % HIDMA_EVRE_SIZE)) {
+		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+		return -EINVAL;
+	}
 
 	spin_lock_irqsave(&lldev->lock, flags);
-	tre = lldev->pending_tre_list[tre_iterator / HIDMA_TRE_SIZE];
+	if (lldev->evre_processed_off == evre_write_off) {
+		spin_unlock_irqrestore(&lldev->lock, flags);
+		return 0;
+	}
+
+	current_evre = lldev->evre_ring + lldev->evre_processed_off;
+	cfg = current_evre[HIDMA_EVRE_CFG_IDX];
+	if (!err_info) {
+		err_info = cfg >> HIDMA_EVRE_ERRINFO_BIT_POS;
+		err_info &= HIDMA_EVRE_ERRINFO_MASK;
+	}
+	if (!err_code)
+		err_code = (cfg >> HIDMA_EVRE_CODE_BIT_POS) &
+			   HIDMA_EVRE_CODE_MASK;
+
+	offset = lldev->tre_processed_off;
+	tre = lldev->pending_tre_list[offset / HIDMA_TRE_SIZE];
 	if (!tre) {
 		spin_unlock_irqrestore(&lldev->lock, flags);
 		dev_warn(lldev->dev, "tre_index [%d] and tre out of sync\n",
-			 tre_iterator / HIDMA_TRE_SIZE);
+			 lldev->tre_processed_off / HIDMA_TRE_SIZE);
 		return -EINVAL;
 	}
 	lldev->pending_tre_list[tre->tre_index] = NULL;
@@ -223,6 +255,14 @@ static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
 		atomic_set(&lldev->pending_tre_count, 0);
 	}
+
+	HIDMA_INCREMENT_ITERATOR(lldev->tre_processed_off, HIDMA_TRE_SIZE,
+				 lldev->tre_ring_size);
+	HIDMA_INCREMENT_ITERATOR(lldev->evre_processed_off, HIDMA_EVRE_SIZE,
+				 lldev->evre_ring_size);
+
+	writel(lldev->evre_processed_off,
+	       lldev->evca + HIDMA_EVCA_DOORBELL_REG);
 	spin_unlock_irqrestore(&lldev->lock, flags);
 
 	tre->err_info = err_info;
@@ -232,86 +272,7 @@ static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
 	kfifo_put(&lldev->handoff_fifo, tre);
 	tasklet_schedule(&lldev->task);
 
-	return 0;
-}
-
-/*
- * Called to handle the interrupt for the channel.
- * Return a positive number if TRE or EVRE were consumed on this run.
- * Return a positive number if there are pending TREs or EVREs.
- * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
- */
-static int hidma_handle_tre_completion(struct hidma_lldev *lldev, u8 err_info,
-				       u8 err_code)
-{
-	u32 evre_ring_size = lldev->evre_ring_size;
-	u32 tre_ring_size = lldev->tre_ring_size;
-	u32 tre_iterator, evre_iterator;
-	u32 num_completed = 0;
-
-	evre_write_off = readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
-	tre_iterator = lldev->tre_processed_off;
-	evre_iterator = lldev->evre_processed_off;
-
-	if ((evre_write_off > evre_ring_size) ||
-	    (evre_write_off % HIDMA_EVRE_SIZE)) {
-		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
-		return 0;
-	}
-
-	/*
-	 * By the time control reaches here the number of EVREs and TREs
-	 * may not match. Only consume the ones that hardware told us.
-	 */
-	while ((evre_iterator != evre_write_off)) {
-		u32 *current_evre = lldev->evre_ring + evre_iterator;
-		u32 cfg;
-
-		cfg = current_evre[HIDMA_EVRE_CFG_IDX];
-		if (!err_info) {
-			err_info = cfg >> HIDMA_EVRE_ERRINFO_BIT_POS;
-			err_info &= HIDMA_EVRE_ERRINFO_MASK;
-		}
-		if (!err_code)
-			err_code = (cfg >> HIDMA_EVRE_CODE_BIT_POS) &
-				   HIDMA_EVRE_CODE_MASK;
-
-		if (hidma_post_completed(lldev, tre_iterator, err_info,
-					 err_code))
-			break;
-
-		HIDMA_INCREMENT_ITERATOR(tre_iterator, HIDMA_TRE_SIZE,
-					 tre_ring_size);
-		HIDMA_INCREMENT_ITERATOR(evre_iterator, HIDMA_EVRE_SIZE,
-					 evre_ring_size);
-
-		/*
-		 * Read the new event descriptor written by the HW.
-		 * As we are processing the delivered events, other events
-		 * get queued to the SW for processing.
-		 */
-		evre_write_off =
-			readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
-		num_completed++;
-	}
-
-	if (num_completed) {
-		u32 evre_read_off = (lldev->evre_processed_off +
-				     HIDMA_EVRE_SIZE * num_completed);
-		u32 tre_read_off = (lldev->tre_processed_off +
-				    HIDMA_TRE_SIZE * num_completed);
-
-		evre_read_off = evre_read_off % evre_ring_size;
-		tre_read_off = tre_read_off % tre_ring_size;
-
-		writel(evre_read_off, lldev->evca + HIDMA_EVCA_DOORBELL_REG);
-
-		/* record the last processed tre offset */
-		lldev->tre_processed_off = tre_read_off;
-		lldev->evre_processed_off = evre_read_off;
-	}
-
-	return num_completed;
+	return 1;
 }
 
 void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
@@ -399,6 +360,16 @@ static int hidma_ll_reset(struct hidma_lldev *lldev)
  */
 static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
 {
+	if ((lldev->trch_state == HIDMA_CH_DISABLED) ||
+	    (lldev->evch_state == HIDMA_CH_DISABLED)) {
+		dev_err(lldev->dev, "error 0x%x, already disabled...\n",
+			cause);
+
+		/* Clear out pending interrupts */
+		writel(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+		return;
+	}
+
 	if (cause & HIDMA_ERR_INT_MASK) {
 		dev_err(lldev->dev, "error 0x%x, disabling...\n", cause);
 
@@ -430,6 +401,9 @@ static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
 		 */
 		if (hidma_handle_tre_completion(lldev, 0, 0))
 			break;
+		if ((lldev->trch_state == HIDMA_CH_DISABLED) ||
+		    (lldev->evch_state == HIDMA_CH_DISABLED))
+			break;
 	}
 
 	/* We consumed TREs or there are pending TREs or EVREs. */