IB/iser: Pass the correct number of entries for dma mapped SGL

Message ID 1547739945-19095-1-git-send-email-israelr@mellanox.com (mailing list archive)
State Mainlined
Commit 57b26497fabe1b9379b59fbc7e35e608e114df16
Delegated to: Jason Gunthorpe
Series IB/iser: Pass the correct number of entries for dma mapped SGL

Commit Message

Israel Rukshin Jan. 17, 2019, 3:45 p.m. UTC
ib_dma_map_sg() maps the SGL into a 'dma mapped SGL'. This process
may change both the number of entries and the length of each entry.

Code that touches dma_address iterates over the 'dma mapped SGL' and
therefore must use dma_nents, the count returned by ib_dma_map_sg().

ib_sg_to_pages() and ib_map_mr_sg() use dma_address, so they must be
passed dma_nents.

Fixes: 39405885005a ("IB/iser: Port to new fast registration API")
Fixes: bfe066e256d5 ("IB/iser: Reuse ib_sg_to_pages")
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/infiniband/ulp/iser/iser_memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
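
For context, a minimal sketch (not iser code; the function and variable names here are hypothetical) of the ib_dma_map_sg() contract the commit message describes: any walk over dma_address/dma_len must use the count that ib_dma_map_sg() returns, since the IOMMU may coalesce entries and make it smaller than the original nents.

#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Sketch only: walk the dma mapped SGL with the returned count. */
static int example_map_and_walk(struct ib_device *ib_dev,
				struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i, dma_nents;

	/* dma_nents may be smaller than nents after IOMMU coalescing. */
	dma_nents = ib_dma_map_sg(ib_dev, sgl, nents, DMA_TO_DEVICE);
	if (unlikely(!dma_nents))
		return -ENOMEM;

	/* Iterate the dma mapped SGL: dma_nents entries, not nents. */
	for_each_sg(sgl, sg, dma_nents, i) {
		dma_addr_t addr = sg_dma_address(sg);

		pr_debug("entry %d: dma_addr %pad len %u\n",
			 i, &addr, sg_dma_len(sg));
	}

	return dma_nents;
}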

Comments

Sagi Grimberg Jan. 17, 2019, 5:43 p.m. UTC | #1
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Jason Gunthorpe Jan. 18, 2019, 9:39 p.m. UTC | #2
On Thu, Jan 17, 2019 at 03:45:45PM +0000, Israel Rukshin wrote:
> ib_dma_map_sg() maps the SGL into a 'dma mapped SGL'. This process
> may change both the number of entries and the length of each entry.
> 
> Code that touches dma_address iterates over the 'dma mapped SGL' and
> therefore must use dma_nents, the count returned by ib_dma_map_sg().
> 
> ib_sg_to_pages() and ib_map_mr_sg() use dma_address, so they must be
> passed dma_nents.
> 
> Fixes: 39405885005a ("IB/iser: Port to new fast registration API")
> Fixes: bfe066e256d5 ("IB/iser: Reuse ib_sg_to_pages")
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>  drivers/infiniband/ulp/iser/iser_memory.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

Applied to for-next

Thanks,
Jason
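
As a usage note before the diff: the fix brings iser in line with the usual ib_map_mr_sg() contract, where the caller passes the dma mapped entry count and treats a shorter return value as an error. A hedged sketch of that calling pattern, with hypothetical names (the real logic is in iser_fast_reg_mr() in the patch below):

#include <linux/sizes.h>
#include <rdma/ib_verbs.h>

/* Sketch only: register a dma mapped SGL and verify full coverage. */
static int example_reg_mr(struct ib_mr *mr, struct scatterlist *sgl,
			  int dma_nents)
{
	int n;

	/* SZ_4K mirrors iser's SIZE_4K page size; adjust as needed. */
	n = ib_map_mr_sg(mr, sgl, dma_nents, NULL, SZ_4K);
	if (unlikely(n != dma_nents)) {
		pr_err("mapped %d of %d sg entries\n", n, dma_nents);
		return n < 0 ? n : -EINVAL;
	}

	return 0;
}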

Patch

diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 009be8889d71..379bc0dfc388 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -240,8 +240,8 @@  int iser_fast_reg_fmr(struct iscsi_iser_task *iser_task,
 	page_vec->npages = 0;
 	page_vec->fake_mr.page_size = SIZE_4K;
 	plen = ib_sg_to_pages(&page_vec->fake_mr, mem->sg,
-			      mem->size, NULL, iser_set_page);
-	if (unlikely(plen < mem->size)) {
+			      mem->dma_nents, NULL, iser_set_page);
+	if (unlikely(plen < mem->dma_nents)) {
 		iser_err("page vec too short to hold this SG\n");
 		iser_data_buf_dump(mem, device->ib_device);
 		iser_dump_page_vec(page_vec);
@@ -451,10 +451,10 @@  static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
 
 	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
 
-	n = ib_map_mr_sg(mr, mem->sg, mem->size, NULL, SIZE_4K);
-	if (unlikely(n != mem->size)) {
+	n = ib_map_mr_sg(mr, mem->sg, mem->dma_nents, NULL, SIZE_4K);
+	if (unlikely(n != mem->dma_nents)) {
 		iser_err("failed to map sg (%d/%d)\n",
-			 n, mem->size);
+			 n, mem->dma_nents);
 		return n < 0 ? n : -EINVAL;
 	}