Message ID | c8150106-764e-f8e9-4c1c-27e60ad96e83@grimberg.me (mailing list archive) |
On Fri, Sep 23, 2016 at 03:21:14PM -0700, Sagi Grimberg wrote:
> Question: is using pci_alloc_irq_vectors() obligated for
> supplying blk-mq with the device affinity mask?

No, but it's very useful.  We'll need equivalents for other busses
that provide multiple vectors and vector spreading.

> If I do this completely-untested [1] what will happen?

Everything will be crashing and burning because you call to_pci_dev
on something that's not a PCI dev?

For the next merge window I plan to wire up the affinity information
for the RDMA code, and I will add a counterpart to blk_mq_pci_map_queues
that spreads the queues over the completion vectors.
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 8d2875b4c56d..76693d406efe 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1518,6 +1518,14 @@ static void nvme_rdma_complete_rq(struct request *rq)
 	blk_mq_end_request(rq, error);
 }
 
+static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+	struct nvme_rdma_ctrl *ctrl = set->driver_data;
+	struct device *dev = ctrl->device->dev.dma_device;
+
+	return blk_mq_pci_map_queues(set, to_pci_dev(dev));
+}
+
 static struct blk_mq_ops nvme_rdma_mq_ops = {
 	.queue_rq	= nvme_rdma_queue_rq,
 	.complete	= nvme_rdma_complete_rq,
@@ -1528,6 +1536,7 @@ static struct blk_mq_ops nvme_rdma_mq_ops = {
 	.init_hctx	= nvme_rdma_init_hctx,
 	.poll		= nvme_rdma_poll,
 	.timeout	= nvme_rdma_timeout,
+	.map_queues	= nvme_rdma_map_queues,
 };
 
 static struct blk_mq_ops nvme_rdma_admin_mq_ops = {