| Message ID | 1465415292-9416-3-git-send-email-mlin@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
This needs documentation in the form of:

/*
 * XXX: blk-mq might not map all our hw contexts but this is a must for
 * us for fabric connects. So until we can fix blk-mq we check that.
 */

> +	hw_queue_mapped = blk_mq_hctx_mapped(ctrl->ctrl.connect_q);
> +	if (hw_queue_mapped < ctrl->ctrl.connect_q->nr_hw_queues) {
> +		dev_err(ctrl->ctrl.device,
> +			"%d hw queues created, but only %d were mapped to sw queues\n",
> +			ctrl->ctrl.connect_q->nr_hw_queues,
> +			hw_queue_mapped);
> +		ret = -EINVAL;
> +		goto out_cleanup_connect_q;
> +	}
> +
> 	ret = nvme_rdma_connect_io_queues(ctrl);
> 	if (ret)
> 		goto out_cleanup_connect_q;
On Thu, Jun 09, 2016 at 02:19:55PM +0300, Sagi Grimberg wrote:
> This needs documentation in the form of:
>
> /*
>  * XXX: blk-mq might not map all our hw contexts but this is a must for
>  * us for fabric connects. So until we can fix blk-mq we check that.
>  */

I think the right thing to do is to have a member of actually mapped
queues in the block layer, and I also don't think we need the XXX comment
as there are valid reasons for not mapping all queues.
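For illustration, a minimal sketch of what Christoph's suggestion might look like; nothing below is from a posted patch. The field name nr_mapped_queues and the helper blk_mq_count_mapped() are hypothetical, and the sketch assumes a hw context counts as "mapped" when blk-mq assigned it at least one sw context (hctx->nr_ctx != 0):

	/* Sketch only: a hypothetical new member in struct request_queue. */
	struct request_queue {
		/* ... existing members elided ... */
		unsigned int	nr_hw_queues;		/* hw contexts allocated */
		unsigned int	nr_mapped_queues;	/* hypothetical: hw contexts with sw queues */
	};

	/* Assumed to run after blk-mq builds the sw-to-hw queue mapping. */
	static void blk_mq_count_mapped(struct request_queue *q)
	{
		struct blk_mq_hw_ctx *hctx;
		unsigned int i, mapped = 0;

		/* count hw contexts that received at least one sw context */
		queue_for_each_hw_ctx(q, hctx, i)
			if (hctx->nr_ctx)
				mapped++;

		q->nr_mapped_queues = mapped;	/* hypothetical field */
	}

A driver could then compare q->nr_mapped_queues against q->nr_hw_queues instead of recomputing the count itself.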
On Thu, Jun 9, 2016 at 7:10 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Jun 09, 2016 at 02:19:55PM +0300, Sagi Grimberg wrote:
>> This needs documentation in the form of:
>>
>> /*
>>  * XXX: blk-mq might not map all our hw contexts but this is a must for
>>  * us for fabric connects. So until we can fix blk-mq we check that.
>>  */
>
> I think the right thing to do is to have a member of actually mapped
> queues in the block layer, and I also don't think we need the XXX comment
> as there are valid reasons for not mapping all queues.

I think it is a rare case that we need all hw contexts mapped. Seems
unnecessary to add a new field to "struct request_queue" for the rare case.
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4edc912..2e8f556 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1771,6 +1771,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = {
 static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)
 {
 	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+	int hw_queue_mapped;
 	int ret;
 
 	ret = nvme_set_queue_count(&ctrl->ctrl, &opts->nr_io_queues);
@@ -1819,6 +1820,16 @@ static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)
 		goto out_free_tag_set;
 	}
 
+	hw_queue_mapped = blk_mq_hctx_mapped(ctrl->ctrl.connect_q);
+	if (hw_queue_mapped < ctrl->ctrl.connect_q->nr_hw_queues) {
+		dev_err(ctrl->ctrl.device,
+			"%d hw queues created, but only %d were mapped to sw queues\n",
+			ctrl->ctrl.connect_q->nr_hw_queues,
+			hw_queue_mapped);
+		ret = -EINVAL;
+		goto out_cleanup_connect_q;
+	}
+
 	ret = nvme_rdma_connect_io_queues(ctrl);
 	if (ret)
 		goto out_cleanup_connect_q;
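The blk_mq_hctx_mapped() helper called above is added by an earlier patch in this series and its definition is not shown on this page. Based purely on how it is used here, a plausible sketch (an assumption, not the series' actual code) is a count of hw contexts that received at least one sw context:

	int blk_mq_hctx_mapped(struct request_queue *q)
	{
		struct blk_mq_hw_ctx *hctx;
		int mapped = 0;
		unsigned int i;

		/* hctx->nr_ctx is the number of sw contexts mapped onto this hctx */
		queue_for_each_hw_ctx(q, hctx, i)
			if (hctx->nr_ctx)
				mapped++;

		return mapped;
	}

Read this way, the driver enforces a full mapping on its own connect_q without adding anything to struct request_queue, which is the trade-off Ming argues for above.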