From patchwork Tue Aug 22 09:56:54 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 9914733
From: Anup Patel <anup.patel@broadcom.com>
To: Vinod Koul, Dan Williams
Cc: Florian Fainelli, Anup Patel, Scott Branden, Ray Jui,
    linux-kernel@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
    dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 05/17] dmaengine: bcm-sba-raid: Remove redundant resp_dma from sba_request
Date: Tue, 22 Aug 2017 15:26:54 +0530
Message-Id: <1503395827-19428-6-git-send-email-anup.patel@broadcom.com>
In-Reply-To: <1503395827-19428-1-git-send-email-anup.patel@broadcom.com>
References: <1503395827-19428-1-git-send-email-anup.patel@broadcom.com>
X-Mailer: git-send-email 2.7.4

Both resp and resp_dma are redundant in sba_request: resp is unused, and
resp_dma carries the same information already present in tx.phys of
sba_request. This patch removes both resp and resp_dma from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
---
 drivers/dma/bcm-sba-raid.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/bcm-sba-raid.c b/drivers/dma/bcm-sba-raid.c
index e8863e9..7d08d4e 100644
--- a/drivers/dma/bcm-sba-raid.c
+++ b/drivers/dma/bcm-sba-raid.c
@@ -113,8 +113,6 @@ struct sba_request {
 	struct list_head next;
 	atomic_t next_pending_count;
 	/* BRCM message data */
-	void *resp;
-	dma_addr_t resp_dma;
 	struct brcm_sba_command *cmds;
 	struct brcm_message msg;
 	struct dma_async_tx_descriptor tx;
@@ -513,6 +511,7 @@ static void sba_fillup_interrupt_msg(struct sba_request *req,
 {
 	u64 cmd;
 	u32 c_mdata;
+	dma_addr_t resp_dma = req->tx.phys;
 	struct brcm_sba_command *cmdsp = cmds;
 
 	/* Type-B command to load dummy data into buf0 */
@@ -528,7 +527,7 @@ static void sba_fillup_interrupt_msg(struct sba_request *req,
 	cmdsp->cmd = cmd;
 	*cmdsp->cmd_dma = cpu_to_le64(cmd);
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_B;
-	cmdsp->data = req->resp_dma;
+	cmdsp->data = resp_dma;
 	cmdsp->data_len = req->sba->hw_resp_size;
 	cmdsp++;
 
@@ -549,11 +548,11 @@ static void sba_fillup_interrupt_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
-	cmdsp->data = req->resp_dma;
+	cmdsp->data = resp_dma;
 	cmdsp->data_len = req->sba->hw_resp_size;
 	cmdsp++;
 
@@ -600,6 +599,7 @@ static void sba_fillup_memcpy_msg(struct sba_request *req,
 {
 	u64 cmd;
 	u32 c_mdata;
+	dma_addr_t resp_dma = req->tx.phys;
 	struct brcm_sba_command *cmdsp = cmds;
 
 	/* Type-B command to load data into buf0 */
@@ -636,7 +636,7 @@ static void sba_fillup_memcpy_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -719,6 +719,7 @@ static void sba_fillup_xor_msg(struct sba_request *req,
 	u64 cmd;
 	u32 c_mdata;
 	unsigned int i;
+	dma_addr_t resp_dma = req->tx.phys;
 	struct brcm_sba_command *cmdsp = cmds;
 
 	/* Type-B command to load data into buf0 */
@@ -774,7 +775,7 @@ static void sba_fillup_xor_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -863,6 +864,7 @@ static void sba_fillup_pq_msg(struct sba_request *req,
 	u64 cmd;
 	u32 c_mdata;
 	unsigned int i;
+	dma_addr_t resp_dma = req->tx.phys;
 	struct brcm_sba_command *cmdsp = cmds;
 
 	if (pq_continue) {
@@ -956,7 +958,7 @@ static void sba_fillup_pq_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -983,7 +985,7 @@ static void sba_fillup_pq_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -1037,6 +1039,7 @@ static void sba_fillup_pq_single_msg(struct sba_request *req,
 	u64 cmd;
 	u32 c_mdata;
 	u8 pos, dpos = raid6_gflog[scf];
+	dma_addr_t resp_dma = req->tx.phys;
 	struct brcm_sba_command *cmdsp = cmds;
 
 	if (!dst_p)
@@ -1115,7 +1118,7 @@ static void sba_fillup_pq_single_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -1236,7 +1239,7 @@ static void sba_fillup_pq_single_msg(struct sba_request *req,
 	cmdsp->flags = BRCM_SBA_CMD_TYPE_A;
 	if (req->sba->hw_resp_size) {
 		cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
-		cmdsp->resp = req->resp_dma;
+		cmdsp->resp = resp_dma;
 		cmdsp->resp_len = req->sba->hw_resp_size;
 	}
 	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
@@ -1458,7 +1461,7 @@ static void sba_receive_message(struct mbox_client *cl, void *msg)
 
 static int sba_prealloc_channel_resources(struct sba_device *sba)
 {
-	int i, j, p, ret = 0;
+	int i, j, ret = 0;
 	struct sba_request *req = NULL;
 
 	sba->resp_base = dma_alloc_coherent(sba->dma_dev.dev,
@@ -1492,16 +1495,13 @@ static int sba_prealloc_channel_resources(struct sba_device *sba)
 		goto fail_free_cmds_pool;
 	}
 
-	for (i = 0, p = 0; i < sba->max_req; i++) {
+	for (i = 0; i < sba->max_req; i++) {
 		req = &sba->reqs[i];
 		INIT_LIST_HEAD(&req->node);
 		req->sba = sba;
 		req->flags = SBA_REQUEST_STATE_FREE;
 		INIT_LIST_HEAD(&req->next);
 		atomic_set(&req->next_pending_count, 0);
-		req->resp = sba->resp_base + p;
-		req->resp_dma = sba->resp_dma_base + p;
-		p += sba->hw_resp_size;
 		req->cmds = devm_kcalloc(sba->dev, sba->max_cmd_per_req,
 					 sizeof(*req->cmds), GFP_KERNEL);
 		if (!req->cmds) {
@@ -1519,7 +1519,7 @@ static int sba_prealloc_channel_resources(struct sba_device *sba)
 		memset(&req->msg, 0, sizeof(req->msg));
 		dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan);
 		req->tx.tx_submit = sba_tx_submit;
-		req->tx.phys = req->resp_dma;
+		req->tx.phys = sba->resp_dma_base + i * sba->hw_resp_size;
 		list_add_tail(&req->node, &sba->reqs_free_list);
 	}
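
For readers skimming the hunks above, the change follows one pattern in every
sba_fillup_*() routine: cache req->tx.phys in a local resp_dma at the top of
the function and use that local wherever req->resp_dma was read before. This
works because prealloc now initialises tx.phys to the per-request slice of the
response area (sba->resp_dma_base + i * sba->hw_resp_size), which is exactly
the value resp_dma used to hold. Below is a minimal sketch of the idea; the
names demo_request, demo_fillup_msg and the hw_resp_size parameter are
simplified stand-ins for illustration, not the driver's real definitions.

/*
 * Illustrative sketch only -- trimmed-down types, not the real driver code.
 * brcm_sba_command and the BRCM_SBA_CMD_* flags come from the Broadcom SBA
 * mailbox message definitions; dma_addr_t and dma_async_tx_descriptor come
 * from the usual DMA engine headers.
 */
struct demo_request {
	struct dma_async_tx_descriptor tx;	/* tx.phys doubles as the response address */
	struct brcm_sba_command *cmds;
};

static void demo_fillup_msg(struct demo_request *req,
			    struct brcm_sba_command *cmdsp,
			    size_t hw_resp_size)
{
	/* Derive the response buffer address from the descriptor once ... */
	dma_addr_t resp_dma = req->tx.phys;

	/* ... then use the local where req->resp_dma used to be read. */
	cmdsp->flags |= BRCM_SBA_CMD_HAS_RESP;
	cmdsp->resp = resp_dma;
	cmdsp->resp_len = hw_resp_size;
	cmdsp->flags |= BRCM_SBA_CMD_HAS_OUTPUT;
	cmdsp->data = resp_dma;
	cmdsp->data_len = hw_resp_size;
}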