Message ID: 1484785346-2991-1-git-send-email-parav@mellanox.com (mailing list archive)
State: Not Applicable
Looks much better, I'm fine with it. Christoph?
On Wed, Jan 18, 2017 at 06:22:26PM -0600, Parav Pandit wrote:
> This patch performs dma sync operations on nvme_command
> and nvme_completion.
>
> nvme_command is synced
> (a) on receiving of the recv queue completion for cpu access.
> (b) before posting recv wqe back to rdma adapter for device access.
>
> nvme_completion is synced
> (a) on receiving of the recv queue completion of associated
>     nvme_command for cpu access.
> (b) before posting send wqe to rdma adapter for device access.
>
> This patch is generated for git://git.infradead.org/nvme-fabrics.git
> Branch: nvmf-4.10
>
> Signed-off-by: Parav Pandit <parav@mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>  drivers/nvme/target/rdma.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 6c1c368..0599217 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -438,6 +438,10 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
>  {
>  	struct ib_recv_wr *bad_wr;
>
> +	ib_dma_sync_single_for_device(ndev->device,
> +		cmd->sge[0].addr, cmd->sge[0].length,
> +		DMA_FROM_DEVICE);
> +
>  	if (ndev->srq)
>  		return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
>  	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
> @@ -538,6 +542,11 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
>  	first_wr = &rsp->send_wr;
>
>  	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
> +
> +	ib_dma_sync_single_for_device(rsp->queue->dev->device,
> +		rsp->send_sge.addr, rsp->send_sge.length,
> +		DMA_TO_DEVICE);
> +
>  	if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
>  		pr_err("sending cmd response failed\n");
>  		nvmet_rdma_release_rsp(rsp);
> @@ -698,6 +707,13 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
>  	cmd->n_rdma = 0;
>  	cmd->req.port = queue->port;
>
> +	ib_dma_sync_single_for_cpu(queue->dev->device,
> +		cmd->cmd->sge[0].addr, cmd->cmd->sge[0].length,
> +		DMA_FROM_DEVICE);
> +	ib_dma_sync_single_for_cpu(queue->dev->device,
> +			cmd->send_sge.addr, cmd->send_sge.length,
> +			DMA_TO_DEVICE);

Why the different indentation here?  Both one or two tab indents look
fine to me in this context, but don't mix them.

Except for that this looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>
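For context, the rule the patch enforces is the standard streaming-DMA
ownership hand-off: sync for the CPU before the CPU reads a buffer the
device wrote, and sync for the device before handing the buffer back for
the next transfer. A minimal sketch against the core DMA API (a sketch
only, not nvmet-rdma code; example_rx_path() and
example_process_command() are made-up placeholder names):

#include <linux/dma-mapping.h>
#include <linux/device.h>

/* Placeholder for whatever parses the received command. */
static void example_process_command(void *buf, size_t len)
{
	/* parse and queue the command */
}

/*
 * The buffer was mapped once with dma_map_single(..., DMA_FROM_DEVICE)
 * and is reused for many receives, so every pass needs both syncs.
 */
static void example_rx_path(struct device *dev, void *buf,
			    dma_addr_t dma_addr, size_t len)
{
	/*
	 * The device just DMA-ed a command into 'buf'.  Transfer
	 * ownership to the CPU so stale cache lines (or a swiotlb
	 * bounce buffer) are handled before the CPU reads it.
	 */
	dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);

	example_process_command(buf, len);

	/*
	 * Transfer ownership back to the device before reposting the
	 * buffer for the next inbound DMA.
	 */
	dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);
}

On cache-coherent platforms with an identity mapping these syncs compile
down to no-ops, which is why the missing calls could go unnoticed; they
matter on non-coherent systems and whenever swiotlb bouncing is in use.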
Hi Christoph,

> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@lst.de]
> Sent: Thursday, January 19, 2017 9:23 AM
> To: Parav Pandit <parav@mellanox.com>
> Cc: hch@lst.de; sagi@grimberg.me; linux-nvme@lists.infradead.org;
> linux-rdma@vger.kernel.org; dledford@redhat.com
> Subject: Re: [PATCHv3] nvmet-rdma: Fix missing dma sync to nvme data
> structures
>
> > +	ib_dma_sync_single_for_cpu(queue->dev->device,
> > +		cmd->cmd->sge[0].addr, cmd->cmd->sge[0].length,
> > +		DMA_FROM_DEVICE);
> > +	ib_dma_sync_single_for_cpu(queue->dev->device,
> > +			cmd->send_sge.addr, cmd->send_sge.length,
> > +			DMA_TO_DEVICE);
>
> Why the different indentation here?  Both one or two tab indents look
> fine to me in this context, but don't mix them.
>
I agree. I should have taken care of that.
Shall I resend the patch with the same indentation everywhere, or can
this be taken care of during commit?

> Except for that this looks fine:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
On Thu, Jan 19, 2017 at 03:45:30PM +0000, Parav Pandit wrote:
> I agree. I should have taken care of that.
> Shall I resend the patch with the same indentation everywhere, or can
> this be taken care of during commit?

I think Sagi or I can just take care of it on commit.
After sending I realized that Doug might also need to do that to merge
this to linux-rdma so that Bart's dma_ops series works correctly.
So it's work for more people. Let me just resend it. It's minor.

Parav

> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@lst.de]
> Sent: Thursday, January 19, 2017 9:46 AM
> To: Parav Pandit <parav@mellanox.com>
> Cc: Christoph Hellwig <hch@lst.de>; sagi@grimberg.me;
> linux-nvme@lists.infradead.org; linux-rdma@vger.kernel.org;
> dledford@redhat.com
> Subject: Re: [PATCHv3] nvmet-rdma: Fix missing dma sync to nvme data
> structures
>
> On Thu, Jan 19, 2017 at 03:45:30PM +0000, Parav Pandit wrote:
> > I agree. I should have taken care of that.
> > Shall I resend the patch with the same indentation everywhere, or
> > can this be taken care of during commit?
>
> I think Sagi or I can just take care of it on commit.
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 6c1c368..0599217 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -438,6 +438,10 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
 {
 	struct ib_recv_wr *bad_wr;
 
+	ib_dma_sync_single_for_device(ndev->device,
+		cmd->sge[0].addr, cmd->sge[0].length,
+		DMA_FROM_DEVICE);
+
 	if (ndev->srq)
 		return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
@@ -538,6 +542,11 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
 	first_wr = &rsp->send_wr;
 
 	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
+
+	ib_dma_sync_single_for_device(rsp->queue->dev->device,
+		rsp->send_sge.addr, rsp->send_sge.length,
+		DMA_TO_DEVICE);
+
 	if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
 		pr_err("sending cmd response failed\n");
 		nvmet_rdma_release_rsp(rsp);
@@ -698,6 +707,13 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
 	cmd->n_rdma = 0;
 	cmd->req.port = queue->port;
 
+	ib_dma_sync_single_for_cpu(queue->dev->device,
+		cmd->cmd->sge[0].addr, cmd->cmd->sge[0].length,
+		DMA_FROM_DEVICE);
+	ib_dma_sync_single_for_cpu(queue->dev->device,
+			cmd->send_sge.addr, cmd->send_sge.length,
+			DMA_TO_DEVICE);
+
 	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
 			&queue->nvme_sq, &nvmet_rdma_ops))
 		return;
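The send side follows the same rule through the ib_dma_* wrappers, which
resolve to the underlying DMA API for the device. A reduced sketch
(struct example_rsp is a made-up stand-in for nvmet_rdma_rsp, keeping
only the fields used here; not the actual driver code):

#include <rdma/ib_verbs.h>

/* Stand-in for nvmet_rdma_rsp, reduced to what the sketch needs. */
struct example_rsp {
	struct ib_sge		send_sge;
	struct ib_send_wr	send_wr;
};

/*
 * The CPU has finished writing the nvme_completion into the send
 * buffer; ownership must pass to the device before the send WR is
 * posted, never after.
 */
static int example_tx_path(struct ib_device *ibdev, struct ib_qp *qp,
			   struct example_rsp *rsp)
{
	struct ib_send_wr *bad_wr;

	ib_dma_sync_single_for_device(ibdev, rsp->send_sge.addr,
				      rsp->send_sge.length,
				      DMA_TO_DEVICE);

	return ib_post_send(qp, &rsp->send_wr, &bad_wr);
}

This mirrors the ordering the nvmet_rdma_queue_response() hunk above
establishes: sync after the CPU's last write, before ib_post_send().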