Message ID | 20241204060601.1813514-1-chaitanya.kumar.borah@intel.com (mailing list archive)
---|---
State | New
Series | [core-for-CI] nvme-pci: don't use dma_alloc_noncontiguous with 0 merge boundary
Hi Chaitanya,

On Wed, 2024-12-04 at 11:36 +0530, Chaitanya Kumar Borah wrote:
> From: Christoph Hellwig <hch@lst.de>
>
> Only call into nvme_alloc_host_mem_single which uses
> dma_alloc_noncontiguous when there is non-null dma merge boundary.
> Without this we'll call into dma_alloc_noncontiguous for device using
> dma-direct, which can work fine as long as the preferred size is below the
> MAX_ORDER of the page allocator, but blows up with a warning if it is
> too large.
>
> Fixes: 63a5c7a4b4c4 ("nvme-pci: use dma_alloc_noncontigous if possible")
> Reported-by: Leon Romanovsky <leon@kernel.org>
> Reported-by: Chaitanya Kumar Borah <chaitanya.kumar.borah@intel.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> References: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13227
> Link: https://lore.kernel.org/linux-nvme/39a67024-2926-4a20-8feb-77dd64ab7c39@kernel.dk/T/#mfef47937b20e33aa3cc63a3af930f8a9f9baf8c2
> ---
>  drivers/nvme/host/pci.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 4c644bb7f069..778f124c2e21 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2172,6 +2172,7 @@ static int nvme_alloc_host_mem_multi(struct nvme_dev *dev, u64 preferred,
>
>  static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
>  {
> +	unsigned long dma_merge_moundary = dma_get_merge_boundary(dev->dev);
>  	u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
>  	u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
>  	u64 chunk_size;
> @@ -2180,7 +2181,7 @@ static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
>  	 * If there is an IOMMU that can merge pages, try a virtually
>  	 * non-contiguous allocation for a single segment first.
>  	 */
> -	if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev))) {
> +	if (dma_merge_moundary && (PAGE_SIZE & dma_merge_moundary) == 0) {
>  		if (!nvme_alloc_host_mem_single(dev, preferred))
>  			return 0;
>  	}

This looks sane and has already been reviewed in the linux-nvme mailing
list.  So, FWIW, you have my:

Acked-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.
On Wed, 04 Dec 2024, Luca Coelho <luca@coelho.fi> wrote:
> Hi Chaitanya,
>
> On Wed, 2024-12-04 at 11:36 +0530, Chaitanya Kumar Borah wrote:
>> From: Christoph Hellwig <hch@lst.de>
>>
>> Only call into nvme_alloc_host_mem_single which uses
>> dma_alloc_noncontiguous when there is non-null dma merge boundary.
>> Without this we'll call into dma_alloc_noncontiguous for device using
>> dma-direct, which can work fine as long as the preferred size is below the
>> MAX_ORDER of the page allocator, but blows up with a warning if it is
>> too large.
>>
>> Fixes: 63a5c7a4b4c4 ("nvme-pci: use dma_alloc_noncontigous if possible")
>> Reported-by: Leon Romanovsky <leon@kernel.org>
>> Reported-by: Chaitanya Kumar Borah <chaitanya.kumar.borah@intel.com>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>> References: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13227
>> Link: https://lore.kernel.org/linux-nvme/39a67024-2926-4a20-8feb-77dd64ab7c39@kernel.dk/T/#mfef47937b20e33aa3cc63a3af930f8a9f9baf8c2
>> ---
>>  drivers/nvme/host/pci.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>> index 4c644bb7f069..778f124c2e21 100644
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -2172,6 +2172,7 @@ static int nvme_alloc_host_mem_multi(struct nvme_dev *dev, u64 preferred,
>>
>>  static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
>>  {
>> +	unsigned long dma_merge_moundary = dma_get_merge_boundary(dev->dev);
>>  	u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
>>  	u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
>>  	u64 chunk_size;
>> @@ -2180,7 +2181,7 @@ static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
>>  	 * If there is an IOMMU that can merge pages, try a virtually
>>  	 * non-contiguous allocation for a single segment first.
>>  	 */
>> -	if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev))) {
>> +	if (dma_merge_moundary && (PAGE_SIZE & dma_merge_moundary) == 0) {
>>  		if (!nvme_alloc_host_mem_single(dev, preferred))
>>  			return 0;
>>  	}
>
> This looks sane and has already been reviewed in the linux-nvme mailing
> list. So, FWIW, you have my:
>
> Acked-by: Luca Coelho <luciano.coelho@intel.com>

Pushed to core-for-CI.

BR,
Jani.
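For context on the behavioural change in the guard: dma_get_merge_boundary() returns 0 for a device on dma-direct, so the old test `!(PAGE_SIZE & boundary)` was true and the single-segment path was attempted even though no IOMMU page merging was available. Below is a minimal standalone sketch of the old versus new condition, not kernel code; the 4 KiB PAGE_SIZE and the 0xfff example boundary (a typical granule-minus-one value when an IOMMU can merge pages) are illustrative assumptions.

```c
/*
 * Standalone sketch of the guard-condition change in nvme_alloc_host_mem().
 * PAGE_SIZE and the example boundary values are assumptions for
 * illustration: 0 models dma-direct (no merging), 0xfff models an IOMMU
 * DMA-ops merge boundary of granule - 1.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Old check: also passes for boundary == 0, since PAGE_SIZE & 0 == 0. */
static bool old_try_single(unsigned long merge_boundary)
{
	return !(PAGE_SIZE & merge_boundary);
}

/* New check: additionally requires a non-zero merge boundary. */
static bool new_try_single(unsigned long merge_boundary)
{
	return merge_boundary && (PAGE_SIZE & merge_boundary) == 0;
}

int main(void)
{
	const unsigned long boundaries[] = { 0x0, 0xfff };

	for (size_t i = 0; i < sizeof(boundaries) / sizeof(boundaries[0]); i++) {
		unsigned long b = boundaries[i];

		printf("merge boundary 0x%lx: old=%d new=%d\n",
		       b, old_try_single(b), new_try_single(b));
	}
	/*
	 * Output:
	 *   merge boundary 0x0:   old=1 new=0   (dma-direct: skip single path)
	 *   merge boundary 0xfff: old=1 new=1   (IOMMU merging: try single path)
	 */
	return 0;
}
```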
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4c644bb7f069..778f124c2e21 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2172,6 +2172,7 @@ static int nvme_alloc_host_mem_multi(struct nvme_dev *dev, u64 preferred,
 
 static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
 {
+	unsigned long dma_merge_moundary = dma_get_merge_boundary(dev->dev);
 	u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
 	u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
 	u64 chunk_size;
@@ -2180,7 +2181,7 @@ static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
 	 * If there is an IOMMU that can merge pages, try a virtually
 	 * non-contiguous allocation for a single segment first.
 	 */
-	if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev))) {
+	if (dma_merge_moundary && (PAGE_SIZE & dma_merge_moundary) == 0) {
 		if (!nvme_alloc_host_mem_single(dev, preferred))
 			return 0;
 	}
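As a rough back-of-the-envelope illustration of the failure mode described in the commit message (without an IOMMU the fallback needs one physically contiguous allocation, which the page allocator caps at MAX_ORDER pages), here is a small standalone calculation. The 4 KiB page size, the order-10 limit, and the 32 MiB preferred size are assumed typical values, not taken from the patch.

```c
/*
 * Rough illustration only: compare the buddy-allocator order needed for a
 * preferred HMB size against an assumed maximum order.  Values are
 * illustrative assumptions, not read from any kernel configuration.
 */
#include <stdio.h>

#define PAGE_SHIFT     12
#define PAGE_SIZE      (1UL << PAGE_SHIFT)
#define MAX_PAGE_ORDER 10	/* assumed largest contiguous-allocation order */

/* Smallest order whose 2^order pages cover the requested size. */
static unsigned int order_for_size(unsigned long size)
{
	unsigned long pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	unsigned long preferred = 32UL << 20;	/* e.g. a 32 MiB preferred HMB */
	unsigned int order = order_for_size(preferred);

	printf("preferred %lu MiB needs order %u, limit is order %d (%lu MiB)\n",
	       preferred >> 20, order, MAX_PAGE_ORDER,
	       (PAGE_SIZE << MAX_PAGE_ORDER) >> 20);
	if (order > MAX_PAGE_ORDER)
		printf("-> a single contiguous allocation exceeds the limit and warns\n");
	return 0;
}
```

With these assumed numbers, 32 MiB needs an order-13 allocation against an order-10 limit, which matches the "blows up with a warning if it is too large" behaviour the patch guards against by skipping the single-segment path when the merge boundary is 0.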