Message ID: 20190305135120.29284-9-igor.j.konopko@intel.com (mailing list archive)
State:      New, archived
Series:     lightnvm: bugfixes and improvements
On 3/5/19 2:51 PM, Igor Konopko wrote:
> The current lightnvm and pblk implementation does not care about the NVMe
> max data transfer size, which can be smaller than 64 * 4K = 256K. There are
> existing NVMe controllers whose max data transfer size is lower than
> 256K (for example 128K), and such controllers are still NVMe spec
> compliant. These controllers are not able to handle a command which
> contains 64 PPAs, since the size of the DMAed buffer will be above the
> capabilities of such a controller.
>
> Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
> Reviewed-by: Javier González <javier@javigon.com>
> ---
>  drivers/lightnvm/core.c      | 9 +++++++--
>  drivers/nvme/host/lightnvm.c | 1 +
>  include/linux/lightnvm.h     | 1 +
>  3 files changed, 9 insertions(+), 2 deletions(-)
>
> [...]

I think I can pick this up. I'll let Javier/Hans rereview.

Given Javier's feedback on broken existing mdts implementations: when
they are brought to light, we may want to add a quirk list to update
MDTS accordingly. Javier, will this address the issue you have
regarding using mdts?

-Matias
> On 5 Mar 2019, at 15.34, Matias Bjørling <mb@lightnvm.io> wrote:
>
> On 3/5/19 2:51 PM, Igor Konopko wrote:
>> [...]
>
> I think I can pick this up. I'll let Javier/Hans rereview.
>
> Given Javier's feedback on broken existing mdts implementations. When
> they are brought to light, we may want to add in a quirk list to
> update MDTS accordingly. Javier, will this address the issue you have
> regarding using mdts?

Sounds good. Thanks for considering this.

Javier
Looks good to me too,

Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>

On Wed, Mar 6, 2019 at 8:44 AM Javier González <javier@javigon.com> wrote:
>
>> On 5 Mar 2019, at 15.34, Matias Bjørling <mb@lightnvm.io> wrote:
>>
>> On 3/5/19 2:51 PM, Igor Konopko wrote:
>>> [...]
>>
>> I think I can pick this up. I'll let Javier/Hans rereview.
>>
>> Given Javier's feedback on broken existing mdts implementations. When
>> they are brought to light, we may want to add in a quirk list to
>> update MDTS accordingly. Javier, will this address the issue you have
>> regarding using mdts?
>
> Sounds good. Thanks for considering this.
>
> Javier
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 5f82036..c01f83b 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -325,6 +325,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	struct nvm_target *t;
 	struct nvm_tgt_dev *tgt_dev;
 	void *targetdata;
+	unsigned int mdts;
 	int ret;
 
 	switch (create->conf.type) {
@@ -412,8 +413,12 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
 	tdisk->private_data = targetdata;
 	tqueue->queuedata = targetdata;
 
-	blk_queue_max_hw_sectors(tqueue,
-			(dev->geo.csecs >> 9) * NVM_MAX_VLBA);
+	mdts = (dev->geo.csecs >> 9) * NVM_MAX_VLBA;
+	if (dev->geo.mdts) {
+		mdts = min_t(u32, dev->geo.mdts,
+				(dev->geo.csecs >> 9) * NVM_MAX_VLBA);
+	}
+	blk_queue_max_hw_sectors(tqueue, mdts);
 
 	set_capacity(tdisk, tt->capacity(targetdata));
 	add_disk(tdisk);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 949e29e..4f20a10 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -977,6 +977,7 @@ int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
 	geo->csecs = 1 << ns->lba_shift;
 	geo->sos = ns->ms;
 	geo->ext = ns->ext;
+	geo->mdts = ns->ctrl->max_hw_sectors;
 
 	dev->q = q;
 	memcpy(dev->name, disk_name, DISK_NAME_LEN);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 5d865a5..d3b0270 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -358,6 +358,7 @@ struct nvm_geo {
 	u16 csecs;	/* sector size */
 	u16 sos;	/* out-of-band area size */
 	bool ext;	/* metadata in extended data buffer */
+	u32 mdts;	/* Max data transfer size*/
 
 	/* device write constrains */
 	u32 ws_min;	/* minimum write size */