
[v2,2/5] nvmef: export nvmef_create_ctrl()

Message ID 20241011121951.90019-3-dlemoal@kernel.org (mailing list archive)
State New
Delegated to: Manivannan Sadhasivam
Series: NVMe PCI endpoint function driver

Commit Message

Damien Le Moal Oct. 11, 2024, 12:19 p.m. UTC
Export nvmef_create_ctrl() to allow drivers to directly call this
function instead of forcing the creation of a fabrics host controller
with a write to /dev/nvmef. The export is restricted to the NVME_FABRICS
namespace.

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/nvme/host/fabrics.c | 4 ++--
 drivers/nvme/host/fabrics.h | 1 +
 2 files changed, 3 insertions(+), 2 deletions(-)
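
As a minimal sketch (assuming a hypothetical in-tree caller), this is roughly how a
driver could use the newly exported symbol; because the export is restricted to the
NVME_FABRICS symbol namespace, the calling module has to import it. Everything below
except nvmf_create_ctrl() and NVME_FABRICS is illustrative:

  /*
   * Hypothetical caller sketch: the export is namespaced, so the calling
   * module must import NVME_FABRICS.
   */
  #include <linux/module.h>
  #include "fabrics.h"	/* nvmf_create_ctrl(); path assumes an in-tree driver */

  MODULE_IMPORT_NS(NVME_FABRICS);

  static struct nvme_ctrl *example_create_backend_ctrl(struct device *dev,
  						       const char *connect_str)
  {
  	/* connect_str uses the same "key=value,..." syntax as nvme connect */
  	return nvmf_create_ctrl(dev, connect_str);
  }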

Comments

Chaitanya Kulkarni Oct. 14, 2024, 6:32 a.m. UTC | #1
On 10/11/24 05:19, Damien Le Moal wrote:

nit: please consider the following subject line, as nvmef_create_ctrl()
     doesn't exist:

     nvme-fabrics: export nvmf_create_ctrl()

> Export nvmef_create_ctrl() to allow drivers to directly call this
> function instead of forcing the creation of a fabrics host controller
> with a write to /dev/nvmef. The export is restricted to the NVME_FABRICS

I'm not sure /dev/nvmef exists; I think you meant /dev/nvme-fabrics
above?

> namespace.
>
> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>

With the above fix, this looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck
Christoph Hellwig Oct. 14, 2024, 8:42 a.m. UTC | #2
s/nvmef_create_ctrl/nvmf_create_ctrl/

But even looking at the code later using it, I fail to see how this
could work.

nvmf_create_ctrl is used to implement writes to the /dev/nvme-fabrics
control device to create a fabrics controller out of thin air. The
biggest part of it is parsing the options provided as a string,
which most importantly includes the actual transport used.

But you really need to force a pcie transport type here, which
as far as I can tell isn't even added anywhere.
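
For reference, the buffer that nvmf_create_ctrl() parses is the same comma-separated
option string that nvme-cli writes to /dev/nvme-fabrics. A loop-transport example,
with purely illustrative NQN values:

  /* Illustrative connect string; "transport=" selects the registered transport. */
  static const char example_connect_str[] =
  	"transport=loop,"
  	"nqn=nqn.2024-10.io.example:subsys1,"
  	"hostnqn=nqn.2024-10.io.example:host1";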
Damien Le Moal Oct. 14, 2024, 9:10 a.m. UTC | #3
On 10/14/24 17:42, Christoph Hellwig wrote:
> s/nvmef_create_ctrl/nvmf_create_ctrl/
> 
> But even looking at the code later using it, I fail to see how this
> could work.
> 
> nvmf_create_ctrl is used to implement writes to the /dev/nvme-fabrics
> control device to create a fabrics controller out of thin air. The
> biggest part of it is parsing the options provided as a string,
> which most importantly includes the actual transport used.
> 
> But you really need to force a pcie transport type here, which
> as far as I can tell isn't even added anywhere.

Nope, the PCIe "transport" is the front-end of the endpoint driver.
What this patch does is allow that driver to create its backend fabrics
controller by passing nvmf_create_ctrl() the same string that the
"nvme connect" command normally uses.

The entire thing was designed not to add a PCIe host transport but rather to use
a locally created fabrics host controller which connects to whatever the user
wants. That is, the PCI endpoint driver can implement a PCI interface which uses
a loop nvme device or a tcp nvme device (and if you have a board that can do
rdma or fc, those can also be used as the backend nvme device driven from the pci
endpoint driver).

I only need this function exported so that I can call it from the configfs
writes of the pci endpoint when the user sets up the endpoint.
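
Roughly, such a configfs hook could look like the sketch below; every name here
except nvmf_create_ctrl() is made up for illustration:

  /* Hypothetical configfs store callback in the PCI endpoint function driver. */
  static ssize_t pci_epf_nvme_ctrl_opts_store(struct config_item *item,
  					      const char *page, size_t len)
  {
  	struct pci_epf_nvme *epf_nvme = to_pci_epf_nvme(item);	/* hypothetical */
  	struct nvme_ctrl *ctrl;

  	/* "page" holds an nvme-connect style option string written by the user */
  	ctrl = nvmf_create_ctrl(&epf_nvme->epf->dev, page);
  	if (IS_ERR(ctrl))
  		return PTR_ERR(ctrl);

  	epf_nvme->ctrl = ctrl;
  	return len;
  }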
Christoph Hellwig Oct. 14, 2024, 11:45 a.m. UTC | #4
On Mon, Oct 14, 2024 at 06:10:36PM +0900, Damien Le Moal wrote:
> Nope, the PCIe "transport" is the front-end of the endpoint driver.
> What this patch does is allow that driver to create its backend fabrics
> controller by passing nvmf_create_ctrl() the same string that the
> "nvme connect" command normally uses.

That's a pretty wrong and broken layering. PCIe isn't any different
from RDMA or TCP in being an NVMe transport.

> The entire thing was designed not to add a PCIe host transport but rather to use
> a locally created fabrics host controller which connects to whatever the user
> wants. That is, the PCI endpoint driver can implement a PCI interface which uses
> a loop nvme device or a tcp nvme device (and if you have a board that can do
> rdma or fc, those can also be used as the backend nvme device driven from the pci
> endpoint driver).

You can always use the nvmet passthrough backend to directly export an
NVMe controller through nvmet.

Patch

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 432efcbf9e2f..e3c990d50704 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -1276,8 +1276,7 @@  EXPORT_SYMBOL_GPL(nvmf_free_options);
 				 NVMF_OPT_FAIL_FAST_TMO | NVMF_OPT_DHCHAP_SECRET |\
 				 NVMF_OPT_DHCHAP_CTRL_SECRET)
 
-static struct nvme_ctrl *
-nvmf_create_ctrl(struct device *dev, const char *buf)
+struct nvme_ctrl *nvmf_create_ctrl(struct device *dev, const char *buf)
 {
 	struct nvmf_ctrl_options *opts;
 	struct nvmf_transport_ops *ops;
@@ -1346,6 +1345,7 @@  nvmf_create_ctrl(struct device *dev, const char *buf)
 	nvmf_free_options(opts);
 	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_NS_GPL(nvmf_create_ctrl, NVME_FABRICS);
 
 static const struct class nvmf_class = {
 	.name = "nvme-fabrics",
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 21d75dc4a3a0..2dd3aeb8c53a 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -214,6 +214,7 @@  static inline unsigned int nvmf_nr_io_queues(struct nvmf_ctrl_options *opts)
 		min(opts->nr_poll_queues, num_online_cpus());
 }
 
+struct nvme_ctrl *nvmf_create_ctrl(struct device *dev, const char *buf);
 int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val);
 int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val);
 int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val);