From patchwork Sat Dec 30 15:28:30 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Himanshu Jha
X-Patchwork-Id: 10137787
From: Himanshu Jha
To: jejb@linux.vnet.ibm.com, martin.petersen@oracle.com, aacraid@adaptec.com
Cc: anil.gurumurthy@qlogic.com, sudarsana.kalluru@qlogic.com,
	QLogic-Storage-Upstream@qlogic.com, satishkh@cisco.com,
	sebaddel@cisco.com, kartilak@cisco.com,
	QLogic-Storage-Upstream@cavium.com, qla2xxx-upstream@qlogic.com,
	linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org, Himanshu Jha
Subject: [PATCH 7/9] scsi: bnx2fc: Use zeroing allocator rather than allocator/memset
Date: Sat, 30 Dec 2017 20:58:30 +0530
Message-Id: <1514647712-6332-8-git-send-email-himanshujha199640@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1514647712-6332-1-git-send-email-himanshujha199640@gmail.com>
References: <1514647712-6332-1-git-send-email-himanshujha199640@gmail.com>
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

Use dma_zalloc_coherent instead of dma_alloc_coherent followed by memset 0.

Generated-by: scripts/coccinelle/api/alloc/kzalloc-simple.cocci
Suggested-by: Luis R. Rodriguez
Signed-off-by: Himanshu Jha
Acked-by: Chad Dupuis
---
 drivers/scsi/bnx2fc/bnx2fc_hwi.c | 60 +++++++++++++++++-----------------
 drivers/scsi/bnx2fc/bnx2fc_tgt.c | 51 +++++++++++++++-------------------
 2 files changed, 47 insertions(+), 64 deletions(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
index 26de61d..e8ae4d6 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
@@ -1857,16 +1857,15 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	 * entries. Hence the limit with one page is 8192 task context
 	 * entries.
 	 */
-	hba->task_ctx_bd_tbl = dma_alloc_coherent(&hba->pcidev->dev,
-						  PAGE_SIZE,
-						  &hba->task_ctx_bd_dma,
-						  GFP_KERNEL);
+	hba->task_ctx_bd_tbl = dma_zalloc_coherent(&hba->pcidev->dev,
+						   PAGE_SIZE,
+						   &hba->task_ctx_bd_dma,
+						   GFP_KERNEL);
 	if (!hba->task_ctx_bd_tbl) {
 		printk(KERN_ERR PFX "unable to allocate task context BDT\n");
 		rc = -1;
 		goto out;
 	}
-	memset(hba->task_ctx_bd_tbl, 0, PAGE_SIZE);
 
 	/*
 	 * Allocate task_ctx which is an array of pointers pointing to
@@ -1895,16 +1894,15 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	task_ctx_bdt = (struct regpair *)hba->task_ctx_bd_tbl;
 	for (i = 0; i < task_ctx_arr_sz; i++) {
 
-		hba->task_ctx[i] = dma_alloc_coherent(&hba->pcidev->dev,
-						      PAGE_SIZE,
-						      &hba->task_ctx_dma[i],
-						      GFP_KERNEL);
+		hba->task_ctx[i] = dma_zalloc_coherent(&hba->pcidev->dev,
+						       PAGE_SIZE,
+						       &hba->task_ctx_dma[i],
+						       GFP_KERNEL);
 		if (!hba->task_ctx[i]) {
 			printk(KERN_ERR PFX "unable to alloc task context\n");
 			rc = -1;
 			goto out3;
 		}
-		memset(hba->task_ctx[i], 0, PAGE_SIZE);
 		addr = (u64)hba->task_ctx_dma[i];
 		task_ctx_bdt->hi = cpu_to_le32((u64)addr >> 32);
 		task_ctx_bdt->lo = cpu_to_le32((u32)addr);
@@ -2033,28 +2031,23 @@ static int bnx2fc_allocate_hash_table(struct bnx2fc_hba *hba)
 	}
 
 	for (i = 0; i < segment_count; ++i) {
-		hba->hash_tbl_segments[i] =
-			dma_alloc_coherent(&hba->pcidev->dev,
-					   BNX2FC_HASH_TBL_CHUNK_SIZE,
-					   &dma_segment_array[i],
-					   GFP_KERNEL);
+		hba->hash_tbl_segments[i] =
+			dma_zalloc_coherent(&hba->pcidev->dev,
+					    BNX2FC_HASH_TBL_CHUNK_SIZE,
+					    &dma_segment_array[i],
+					    GFP_KERNEL);
 		if (!hba->hash_tbl_segments[i]) {
 			printk(KERN_ERR PFX "hash segment alloc failed\n");
 			goto cleanup_dma;
 		}
-		memset(hba->hash_tbl_segments[i], 0,
-		       BNX2FC_HASH_TBL_CHUNK_SIZE);
 	}
 
-	hba->hash_tbl_pbl = dma_alloc_coherent(&hba->pcidev->dev,
-					       PAGE_SIZE,
-					       &hba->hash_tbl_pbl_dma,
-					       GFP_KERNEL);
+	hba->hash_tbl_pbl = dma_zalloc_coherent(&hba->pcidev->dev, PAGE_SIZE,
+						&hba->hash_tbl_pbl_dma,
+						GFP_KERNEL);
 	if (!hba->hash_tbl_pbl) {
 		printk(KERN_ERR PFX "hash table pbl alloc failed\n");
 		goto cleanup_dma;
 	}
-	memset(hba->hash_tbl_pbl, 0, PAGE_SIZE);
 
 	pbl = hba->hash_tbl_pbl;
 	for (i = 0; i < segment_count; ++i) {
@@ -2111,27 +2104,26 @@ int bnx2fc_setup_fw_resc(struct bnx2fc_hba *hba)
 		return -ENOMEM;
 
 	mem_size = BNX2FC_NUM_MAX_SESS * sizeof(struct regpair);
-	hba->t2_hash_tbl_ptr = dma_alloc_coherent(&hba->pcidev->dev, mem_size,
-						  &hba->t2_hash_tbl_ptr_dma,
-						  GFP_KERNEL);
+	hba->t2_hash_tbl_ptr = dma_zalloc_coherent(&hba->pcidev->dev,
+						   mem_size,
+						   &hba->t2_hash_tbl_ptr_dma,
+						   GFP_KERNEL);
 	if (!hba->t2_hash_tbl_ptr) {
 		printk(KERN_ERR PFX "unable to allocate t2 hash table ptr\n");
 		bnx2fc_free_fw_resc(hba);
 		return -ENOMEM;
 	}
-	memset(hba->t2_hash_tbl_ptr, 0x00, mem_size);
 
 	mem_size = BNX2FC_NUM_MAX_SESS *
 			sizeof(struct fcoe_t2_hash_table_entry);
-	hba->t2_hash_tbl = dma_alloc_coherent(&hba->pcidev->dev, mem_size,
-					      &hba->t2_hash_tbl_dma,
-					      GFP_KERNEL);
+	hba->t2_hash_tbl = dma_zalloc_coherent(&hba->pcidev->dev, mem_size,
+					       &hba->t2_hash_tbl_dma,
+					       GFP_KERNEL);
 	if (!hba->t2_hash_tbl) {
 		printk(KERN_ERR PFX "unable to allocate t2 hash table\n");
 		bnx2fc_free_fw_resc(hba);
 		return -ENOMEM;
 	}
-	memset(hba->t2_hash_tbl, 0x00, mem_size);
 	for (i = 0; i < BNX2FC_NUM_MAX_SESS; i++) {
 		addr = (unsigned long) hba->t2_hash_tbl_dma +
 			 ((i+1) * sizeof(struct fcoe_t2_hash_table_entry));
@@ -2148,16 +2140,14 @@ int bnx2fc_setup_fw_resc(struct bnx2fc_hba *hba)
 		return -ENOMEM;
 	}
 
-	hba->stats_buffer = dma_alloc_coherent(&hba->pcidev->dev,
-					       PAGE_SIZE,
-					       &hba->stats_buf_dma,
-					       GFP_KERNEL);
+	hba->stats_buffer = dma_zalloc_coherent(&hba->pcidev->dev, PAGE_SIZE,
+						&hba->stats_buf_dma,
+						GFP_KERNEL);
 	if (!hba->stats_buffer) {
 		printk(KERN_ERR PFX "unable to alloc Stats Buffer\n");
 		bnx2fc_free_fw_resc(hba);
 		return -ENOMEM;
 	}
-	memset(hba->stats_buffer, 0x00, PAGE_SIZE);
 
 	return 0;
 }
diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
index a8ae1a0..e3d1c7c 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
@@ -672,56 +672,52 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
 	tgt->sq_mem_size = (tgt->sq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			   CNIC_PAGE_MASK;
 
-	tgt->sq = dma_alloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size,
-				     &tgt->sq_dma, GFP_KERNEL);
+	tgt->sq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size,
+				      &tgt->sq_dma, GFP_KERNEL);
 	if (!tgt->sq) {
 		printk(KERN_ERR PFX "unable to allocate SQ memory %d\n",
 			tgt->sq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->sq, 0, tgt->sq_mem_size);
 
 	/* Allocate and map CQ */
 	tgt->cq_mem_size = tgt->max_cqes * BNX2FC_CQ_WQE_SIZE;
 	tgt->cq_mem_size = (tgt->cq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			   CNIC_PAGE_MASK;
 
-	tgt->cq = dma_alloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size,
-				     &tgt->cq_dma, GFP_KERNEL);
+	tgt->cq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size,
+				      &tgt->cq_dma, GFP_KERNEL);
 	if (!tgt->cq) {
 		printk(KERN_ERR PFX "unable to allocate CQ memory %d\n",
 			tgt->cq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->cq, 0, tgt->cq_mem_size);
 
 	/* Allocate and map RQ and RQ PBL */
 	tgt->rq_mem_size = tgt->max_rqes * BNX2FC_RQ_WQE_SIZE;
 	tgt->rq_mem_size = (tgt->rq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			   CNIC_PAGE_MASK;
 
-	tgt->rq = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size,
-				     &tgt->rq_dma, GFP_KERNEL);
+	tgt->rq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size,
+				      &tgt->rq_dma, GFP_KERNEL);
 	if (!tgt->rq) {
 		printk(KERN_ERR PFX "unable to allocate RQ memory %d\n",
 			tgt->rq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->rq, 0, tgt->rq_mem_size);
 
 	tgt->rq_pbl_size = (tgt->rq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *);
 	tgt->rq_pbl_size = (tgt->rq_pbl_size + (CNIC_PAGE_SIZE - 1)) &
 			   CNIC_PAGE_MASK;
 
-	tgt->rq_pbl = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size,
-					 &tgt->rq_pbl_dma, GFP_KERNEL);
+	tgt->rq_pbl = dma_zalloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size,
+					  &tgt->rq_pbl_dma, GFP_KERNEL);
 	if (!tgt->rq_pbl) {
 		printk(KERN_ERR PFX "unable to allocate RQ PBL %d\n",
 			tgt->rq_pbl_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->rq_pbl, 0, tgt->rq_pbl_size);
 
 	num_pages = tgt->rq_mem_size / CNIC_PAGE_SIZE;
 	page = tgt->rq_dma;
 	pbl = (u32 *)tgt->rq_pbl;
@@ -739,44 +735,43 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
 	tgt->xferq_mem_size = (tgt->xferq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			       CNIC_PAGE_MASK;
 
-	tgt->xferq = dma_alloc_coherent(&hba->pcidev->dev, tgt->xferq_mem_size,
-					&tgt->xferq_dma, GFP_KERNEL);
+	tgt->xferq = dma_zalloc_coherent(&hba->pcidev->dev,
+					 tgt->xferq_mem_size, &tgt->xferq_dma,
+					 GFP_KERNEL);
 	if (!tgt->xferq) {
 		printk(KERN_ERR PFX "unable to allocate XFERQ %d\n",
 			tgt->xferq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->xferq, 0, tgt->xferq_mem_size);
 
 	/* Allocate and map CONFQ & CONFQ PBL */
 	tgt->confq_mem_size = tgt->max_sqes * BNX2FC_CONFQ_WQE_SIZE;
 	tgt->confq_mem_size = (tgt->confq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			       CNIC_PAGE_MASK;
 
-	tgt->confq = dma_alloc_coherent(&hba->pcidev->dev, tgt->confq_mem_size,
-					&tgt->confq_dma, GFP_KERNEL);
+	tgt->confq = dma_zalloc_coherent(&hba->pcidev->dev,
+					 tgt->confq_mem_size, &tgt->confq_dma,
+					 GFP_KERNEL);
 	if (!tgt->confq) {
 		printk(KERN_ERR PFX "unable to allocate CONFQ %d\n",
 			tgt->confq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->confq, 0, tgt->confq_mem_size);
 
 	tgt->confq_pbl_size =
 		(tgt->confq_mem_size / CNIC_PAGE_SIZE) * sizeof(void *);
 	tgt->confq_pbl_size =
 		(tgt->confq_pbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK;
 
-	tgt->confq_pbl = dma_alloc_coherent(&hba->pcidev->dev,
-					    tgt->confq_pbl_size,
-					    &tgt->confq_pbl_dma, GFP_KERNEL);
+	tgt->confq_pbl = dma_zalloc_coherent(&hba->pcidev->dev,
+					     tgt->confq_pbl_size,
+					     &tgt->confq_pbl_dma, GFP_KERNEL);
 	if (!tgt->confq_pbl) {
 		printk(KERN_ERR PFX "unable to allocate CONFQ PBL %d\n",
 			tgt->confq_pbl_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->confq_pbl, 0, tgt->confq_pbl_size);
 
 	num_pages = tgt->confq_mem_size / CNIC_PAGE_SIZE;
 	page = tgt->confq_dma;
 	pbl = (u32 *)tgt->confq_pbl;
@@ -792,15 +787,14 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
 	/* Allocate and map ConnDB */
 	tgt->conn_db_mem_size = sizeof(struct fcoe_conn_db);
 
-	tgt->conn_db = dma_alloc_coherent(&hba->pcidev->dev,
-					  tgt->conn_db_mem_size,
-					  &tgt->conn_db_dma, GFP_KERNEL);
+	tgt->conn_db = dma_zalloc_coherent(&hba->pcidev->dev,
+					   tgt->conn_db_mem_size,
+					   &tgt->conn_db_dma, GFP_KERNEL);
 	if (!tgt->conn_db) {
 		printk(KERN_ERR PFX "unable to allocate conn_db %d\n",
 			tgt->conn_db_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->conn_db, 0, tgt->conn_db_mem_size);
 
 
 	/* Allocate and map LCQ */
@@ -808,15 +802,14 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
 	tgt->lcq_mem_size = (tgt->lcq_mem_size + (CNIC_PAGE_SIZE - 1)) &
 			     CNIC_PAGE_MASK;
 
-	tgt->lcq = dma_alloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size,
-				      &tgt->lcq_dma, GFP_KERNEL);
+	tgt->lcq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size,
+				       &tgt->lcq_dma, GFP_KERNEL);
 	if (!tgt->lcq) {
 		printk(KERN_ERR PFX "unable to allocate lcq %d\n",
 			tgt->lcq_mem_size);
 		goto mem_alloc_failure;
 	}
-	memset(tgt->lcq, 0, tgt->lcq_mem_size);
 
 	tgt->conn_db->rq_prod = 0x8000;
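
For readers unfamiliar with the API, every hunk above applies the same mechanical
transformation. The snippet below is a minimal illustrative sketch, not code from
bnx2fc: the example_* helpers and their dev/size/handle parameters are placeholders.
dma_zalloc_coherent() returns memory that is already zeroed, so the explicit
memset() after a successful allocation becomes redundant.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/string.h>

/* Old pattern: allocate coherent DMA memory, then zero it by hand. */
static void *example_alloc_then_clear(struct device *dev, size_t size,
				      dma_addr_t *handle)
{
	void *buf = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);

	if (!buf)
		return NULL;
	memset(buf, 0, size);
	return buf;
}

/* New pattern used throughout this patch: one call does both steps. */
static void *example_zalloc(struct device *dev, size_t size,
			    dma_addr_t *handle)
{
	return dma_zalloc_coherent(dev, size, handle, GFP_KERNEL);
}

Callers are otherwise unaffected: the buffer is still released with
dma_free_coherent(), and only the initialisation of the allocated memory changes.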