From patchwork Sun Oct 27 14:21:58 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852671
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Christoph Hellwig, Sagi Grimberg
Cc: Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
	Shameer Kolothum, Kevin Tian, Alex Williamson, Marek Szyprowski,
	Jérôme Glisse, Andrew Morton, Jonathan Corbet,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 5/7] nvme-pci: remove struct nvme_descriptor
Date: Sun, 27 Oct 2024 16:21:58 +0200
Message-ID: <31fe216877a270c00bc79cddace3a9f4b3ade50c.1730037261.git.leon@kernel.org>
X-Mailer: git-send-email 2.46.2

From: Christoph Hellwig

There is no real point in having a union of two pointer types here;
just use a void pointer, since we already mix and match types between
the arms of the union on the allocation and freeing sides.

Also rename the nr_allocations field to nr_descriptors to better
describe what it does.

Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/pci.c | 51 ++++++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 29 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4b9fda0b1d9a..ba077a42cbba 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -43,7 +43,8 @@
  */
 #define NVME_MAX_KB_SZ	8192
 #define NVME_MAX_SEGS	128
-#define NVME_MAX_NR_ALLOCATIONS	5
+
+#define NVME_MAX_NR_DESCRIPTORS	5
 
 static int use_threaded_interrupts;
 module_param(use_threaded_interrupts, int, 0444);
@@ -216,28 +217,20 @@ struct nvme_queue {
 	struct completion delete_done;
 };
 
-union nvme_descriptor {
-	struct nvme_sgl_desc	*sg_list;
-	__le64			*prp_list;
-};
-
 /*
  * The nvme_iod describes the data in an I/O.
- *
- * The sg pointer contains the list of PRP/SGL chunk allocations in addition
- * to the actual struct scatterlist.
  */
 struct nvme_iod {
 	struct nvme_request req;
 	struct nvme_command cmd;
 	bool aborted;
-	s8 nr_allocations;	/* PRP list pool allocations. 0 means small
-				   pool in use */
+	/* # of PRP/SGL descriptors: (0 for small pool) */
+	s8 nr_descriptors;
 	unsigned int dma_len;	/* length of single DMA segment mapping */
 	dma_addr_t first_dma;
 	dma_addr_t meta_dma;
 	struct sg_table sgt;
-	union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS];
+	void *descriptors[NVME_MAX_NR_DESCRIPTORS];
 };
 
 static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev)
@@ -528,8 +521,8 @@ static void nvme_free_prps(struct nvme_dev *dev, struct request *req)
 	dma_addr_t dma_addr = iod->first_dma;
 	int i;
 
-	for (i = 0; i < iod->nr_allocations; i++) {
-		__le64 *prp_list = iod->list[i].prp_list;
+	for (i = 0; i < iod->nr_descriptors; i++) {
+		__le64 *prp_list = iod->descriptors[i];
 		dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]);
 
 		dma_pool_free(dev->prp_page_pool, prp_list, dma_addr);
@@ -551,11 +544,11 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 
 	dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0);
 
-	if (iod->nr_allocations == 0)
-		dma_pool_free(dev->prp_small_pool, iod->list[0].sg_list,
+	if (iod->nr_descriptors == 0)
+		dma_pool_free(dev->prp_small_pool, iod->descriptors[0],
 			      iod->first_dma);
-	else if (iod->nr_allocations == 1)
-		dma_pool_free(dev->prp_page_pool, iod->list[0].sg_list,
+	else if (iod->nr_descriptors == 1)
+		dma_pool_free(dev->prp_page_pool, iod->descriptors[0],
 			      iod->first_dma);
 	else
 		nvme_free_prps(dev, req);
@@ -613,18 +606,18 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 	nprps = DIV_ROUND_UP(length, NVME_CTRL_PAGE_SIZE);
 	if (nprps <= (256 / 8)) {
 		pool = dev->prp_small_pool;
-		iod->nr_allocations = 0;
+		iod->nr_descriptors = 0;
 	} else {
 		pool = dev->prp_page_pool;
-		iod->nr_allocations = 1;
+		iod->nr_descriptors = 1;
 	}
 
 	prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
 	if (!prp_list) {
-		iod->nr_allocations = -1;
+		iod->nr_descriptors = -1;
 		return BLK_STS_RESOURCE;
 	}
-	iod->list[0].prp_list = prp_list;
+	iod->descriptors[0] = prp_list;
 	iod->first_dma = prp_dma;
 	i = 0;
 	for (;;) {
@@ -633,7 +626,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 			prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
 			if (!prp_list)
 				goto free_prps;
-			iod->list[iod->nr_allocations++].prp_list = prp_list;
+			iod->descriptors[iod->nr_descriptors++] = prp_list;
 			prp_list[0] = old_prp_list[i - 1];
 			old_prp_list[i - 1] = cpu_to_le64(prp_dma);
 			i = 1;
@@ -703,19 +696,19 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
 
 	if (entries <= (256 / sizeof(struct nvme_sgl_desc))) {
 		pool = dev->prp_small_pool;
-		iod->nr_allocations = 0;
+		iod->nr_descriptors = 0;
 	} else {
 		pool = dev->prp_page_pool;
-		iod->nr_allocations = 1;
+		iod->nr_descriptors = 1;
 	}
 
 	sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma);
 	if (!sg_list) {
-		iod->nr_allocations = -1;
+		iod->nr_descriptors = -1;
 		return BLK_STS_RESOURCE;
 	}
 
-	iod->list[0].sg_list = sg_list;
+	iod->descriptors[0] = sg_list;
 	iod->first_dma = sgl_dma;
 
 	nvme_pci_sgl_set_seg(&cmd->dptr.sgl, sgl_dma, entries);
@@ -841,7 +834,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 	blk_status_t ret;
 
 	iod->aborted = false;
-	iod->nr_allocations = -1;
+	iod->nr_descriptors = -1;
 	iod->sgt.nents = 0;
 
 	ret = nvme_setup_cmd(req->q->queuedata, req);
@@ -3626,7 +3619,7 @@ static int __init nvme_init(void)
 	BUILD_BUG_ON(IRQ_AFFINITY_MAX_SETS < 2);
 	BUILD_BUG_ON(NVME_MAX_SEGS > SGES_PER_PAGE);
 	BUILD_BUG_ON(sizeof(struct scatterlist) * NVME_MAX_SEGS > PAGE_SIZE);
-	BUILD_BUG_ON(nvme_pci_npages_prp() > NVME_MAX_NR_ALLOCATIONS);
+	BUILD_BUG_ON(nvme_pci_npages_prp() > NVME_MAX_NR_DESCRIPTORS);
 
 	return pci_register_driver(&nvme_driver);
 }
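
To make the shape of the change easier to see outside the driver context, the
following is a minimal, self-contained sketch of the descriptor bookkeeping
before and after the patch. It is an illustration only, not code from the
patch or the kernel tree: the stand-in typedefs and the trimmed
struct nvme_iod_sketch are assumptions made so the snippet compiles in user
space.

	#include <stdint.h>

	/* Stand-ins for kernel types so this sketch builds outside the kernel. */
	typedef uint64_t __le64;
	struct nvme_sgl_desc {
		uint64_t addr;
		uint32_t length;
		uint8_t  rsvd[3];
		uint8_t  type;
	};

	/* Before the patch: a union of the two descriptor pointer types. */
	union nvme_descriptor {
		struct nvme_sgl_desc	*sg_list;
		__le64			*prp_list;
	};

	/*
	 * After the patch: the same storage as plain void pointers. The PRP and
	 * SGL paths already stored and freed either arm interchangeably, so the
	 * union provided no real type safety; a void * array states that directly.
	 */
	#define NVME_MAX_NR_DESCRIPTORS	5

	struct nvme_iod_sketch {	/* trimmed to the fields the patch touches */
		int8_t	nr_descriptors;	/* # of PRP/SGL descriptors, 0 = small pool */
		void	*descriptors[NVME_MAX_NR_DESCRIPTORS];
	};

The freeing paths in the patch then only need the count and the untyped
pointers: 0 means the small pool, 1 means a single page-pool descriptor, and
larger counts walk the array, exactly as nvme_unmap_data() and
nvme_free_prps() do in the diff above.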