| Message ID | 20221025135958.6242-5-aaptel@nvidia.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Series | nvme-tcp receive offloads |
On Tue, Oct 25, 2022 at 04:59:39PM +0300, Aurelien Aptel wrote:
> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
>
> The newly added NVMeTCP offload requires the field
> nvme_tcp_queue->queue_size in the patch
> "nvme-tcp: Add DDP offload control path" in nvme_tcp_offload_socket().
> The queue size is part of struct ulp_ddp_config
> parameters.

Please never do reverts if you just bring something back for an entirely
different reason. And I think we need a really good justification of
why you have a code path that can get the queue struct and not the
controller, which really should not happen.
>> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
>>
>> The newly added NVMeTCP offload requires the field
>> nvme_tcp_queue->queue_size in the patch
>> "nvme-tcp: Add DDP offload control path" in nvme_tcp_offload_socket().
>> The queue size is part of struct ulp_ddp_config
>> parameters.
>
> Please never do reverts if you just bring something back for an entirely
> different reason.

Agreed.

> And I think we need a really good justification of
> why you have a code path that can get the queue struct and not the
> controller, which really should not happen.

What is wrong with just using either ctrl->sqsize/NVME_AQ_DEPTH based
on the qid?
On Wed, 26 Oct 2022 at 12:02, Sagi Grimberg <sagi@grimberg.me> wrote:
> >> This reverts commit fb8745d040ef5b9080003325e56b91fefe1022bb.
> >>
> >> The newly added NVMeTCP offload requires the field
> >> nvme_tcp_queue->queue_size in the patch
> >> "nvme-tcp: Add DDP offload control path" in nvme_tcp_offload_socket().
> >> The queue size is part of struct ulp_ddp_config
> >> parameters.
> >
> > Please never do reverts if you just bring something back for an entirely
> > different reason.
>
> Agreed.

Sure.

> > And I think we need a really good justification of
> > why you have a code path that can get the queue struct and not the
> > controller, which really should not happen.
>
> What is wrong with just using either ctrl->sqsize/NVME_AQ_DEPTH based
> on the qid?

Thanks, we will use ctrl->sqsize. No need to use NVME_AQ_DEPTH as the
offload is used only with IO queues.
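For illustration, a minimal sketch of what the direction agreed above could look like inside nvme_tcp_alloc_queue(), deriving the size from the qid rather than adding a parameter. This is hypothetical code, not taken from the series; the diff below is the version as posted, which still threads the size through as a parameter:

```c
/*
 * Hypothetical sketch of Sagi's suggestion: derive the queue size from
 * the qid inside the allocation path instead of adding a parameter.
 * qid 0 is the admin queue; IO queues use sqsize, which is zero-based
 * in NVMe, hence the +1.
 */
static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
{
	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
	struct nvme_tcp_queue *queue = &ctrl->queues[qid];

	queue->queue_size = qid ? nctrl->sqsize + 1 : NVME_AQ_DEPTH;

	/* ... rest of the queue setup is unchanged ... */
	return 0;
}
```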
```diff
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 1eed0fc26b3a..42b2d86dcfc2 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -133,6 +133,7 @@ struct nvme_tcp_queue {
 	/* send state */
 	struct nvme_tcp_request *request;
 
+	int			queue_size;
 	u32			maxh2cdata;
 	size_t			cmnd_capsule_len;
 	struct nvme_tcp_ctrl	*ctrl;
@@ -1475,7 +1476,8 @@ static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
 		queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
 }
 
-static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
+static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+		int qid, size_t queue_size)
 {
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
@@ -1487,6 +1489,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
 	INIT_LIST_HEAD(&queue->send_list);
 	mutex_init(&queue->send_mutex);
 	INIT_WORK(&queue->io_work, nvme_tcp_io_work);
+	queue->queue_size = queue_size;
 
 	if (qid > 0)
 		queue->cmnd_capsule_len = nctrl->ioccsz * 16;
@@ -1734,7 +1737,7 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
 {
 	int ret;
 
-	ret = nvme_tcp_alloc_queue(ctrl, 0);
+	ret = nvme_tcp_alloc_queue(ctrl, 0, NVME_AQ_DEPTH);
 	if (ret)
 		return ret;
 
@@ -1754,7 +1757,7 @@ static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 	int i, ret;
 
 	for (i = 1; i < ctrl->queue_count; i++) {
-		ret = nvme_tcp_alloc_queue(ctrl, i);
+		ret = nvme_tcp_alloc_queue(ctrl, i, ctrl->sqsize + 1);
 		if (ret)
 			goto out_free_queues;
 	}
```
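For context on why the field is needed at all: per the commit message, nvme_tcp_offload_socket() (from the "nvme-tcp: Add DDP offload control path" patch) has to hand the queue depth to the device through the DDP config. A hedged sketch follows; the helper and field names are assumptions based on the commit message, not copied from that patch:

```c
/*
 * Hypothetical illustration only. The commit message says the queue
 * size is part of the struct ulp_ddp_config parameters; the exact
 * field name here is an assumption.
 */
static void example_fill_ddp_config(struct nvme_tcp_queue *queue,
				    struct ulp_ddp_config *config)
{
	/* Per the thread above, the offload is set up for IO queues only. */
	config->queue_size = queue->queue_size;	/* assumed field name */
}
```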