From patchwork Wed Nov 8 10:06:16 2017
From: Sagi Grimberg
To: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Christoph Hellwig, Max Gurtuvoy
Subject: [PATCH v2 3/3] nvme-rdma: wait for local invalidation before completing a request
Date: Wed, 8 Nov 2017 12:06:16 +0200
Message-Id: <20171108100616.26605-4-sagi@grimberg.me>
In-Reply-To: <20171108100616.26605-1-sagi@grimberg.me>
References: <20171108100616.26605-1-sagi@grimberg.me>

We must not complete a request before the host memory region is
invalidated. Luckily we have Send With Invalidate protocol support, so
we usually don't need to issue a local invalidation ourselves, but in
case the target did not invalidate the memory region for us, we must
wait for that invalidation to complete before unmapping host memory
and completing the I/O.
Signed-off-by: Sagi Grimberg
---
 drivers/nvme/host/rdma.c | 47 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 998f7cb445d9..6ddaa7964657 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1014,8 +1014,24 @@ static void nvme_rdma_memreg_done(struct ib_cq *cq, struct ib_wc *wc)
 
 static void nvme_rdma_inv_rkey_done(struct ib_cq *cq, struct ib_wc *wc)
 {
-	if (unlikely(wc->status != IB_WC_SUCCESS))
+	struct nvme_rdma_request *req =
+		container_of(wc->wr_cqe, struct nvme_rdma_request, reg_cqe);
+	struct request *rq = blk_mq_rq_from_pdu(req);
+	unsigned long flags;
+	bool end;
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
 		nvme_rdma_wr_error(cq, wc, "LOCAL_INV");
+		return;
+	}
+
+	spin_lock_irqsave(&req->lock, flags);
+	req->mr->need_inval = false;
+	end = (req->resp_completed && req->send_completed);
+	spin_unlock_irqrestore(&req->lock, flags);
+	if (end)
+		nvme_end_request(rq, req->cqe.status, req->cqe.result);
+
 }
 
 static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
@@ -1026,7 +1042,7 @@ static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
 		.opcode		    = IB_WR_LOCAL_INV,
 		.next		    = NULL,
 		.num_sge	    = 0,
-		.send_flags	    = 0,
+		.send_flags	    = IB_SEND_SIGNALED,
 		.ex.invalidate_rkey = req->mr->rkey,
 	};
 
@@ -1040,24 +1056,12 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		struct request *rq)
 {
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
-	struct nvme_rdma_ctrl *ctrl = queue->ctrl;
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
-	int res;
 
 	if (!blk_rq_bytes(rq))
 		return;
 
-	if (req->mr->need_inval && test_bit(NVME_RDMA_Q_LIVE, &req->queue->flags)) {
-		res = nvme_rdma_inv_rkey(queue, req);
-		if (unlikely(res < 0)) {
-			dev_err(ctrl->ctrl.device,
-				"Queueing INV WR for rkey %#x failed (%d)\n",
-				req->mr->rkey, res);
-			nvme_rdma_error_recovery(queue->ctrl);
-		}
-	}
-
 	ib_dma_unmap_sg(ibdev, req->sg_table.sgl,
 			req->nents, rq_data_dir(rq) ==
 			WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
@@ -1213,7 +1217,7 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 
 	spin_lock_irqsave(&req->lock, flags);
 	req->send_completed = true;
-	end = req->resp_completed;
+	end = req->resp_completed && !req->mr->need_inval;
 	spin_unlock_irqrestore(&req->lock, flags);
 	if (end)
 		nvme_end_request(rq, req->cqe.status, req->cqe.result);
@@ -1338,12 +1342,21 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 	req->cqe.result = cqe->result;
 
 	if ((wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
-	    wc->ex.invalidate_rkey == req->mr->rkey)
+	    wc->ex.invalidate_rkey == req->mr->rkey) {
 		req->mr->need_inval = false;
+	} else if (req->mr->need_inval) {
+		ret = nvme_rdma_inv_rkey(queue, req);
+		if (unlikely(ret < 0)) {
+			dev_err(queue->ctrl->ctrl.device,
+				"Queueing INV WR for rkey %#x failed (%d)\n",
+				req->mr->rkey, ret);
+			nvme_rdma_error_recovery(queue->ctrl);
+		}
+	}
 
 	spin_lock_irqsave(&req->lock, flags);
 	req->resp_completed = true;
-	end = req->send_completed;
+	end = req->send_completed && !req->mr->need_inval;
 	spin_unlock_irqrestore(&req->lock, flags);
 	if (end) {
 		if (rq->tag == tag)
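
For reference, here is a minimal userspace sketch (not the driver code) of the
completion gating this patch introduces: a request is only ended once the send
WR has completed, the NVMe response has arrived, and the MR no longer needs
invalidation (either because the target used Send With Invalidate, or because
our signaled LOCAL_INV work request has completed). The demo_* names are
illustrative, and a pthread mutex stands in for req->lock with irqsave.

```c
#include <stdbool.h>
#include <pthread.h>

struct demo_request {
	pthread_mutex_t lock;	/* stands in for spin_lock_irqsave(&req->lock) */
	bool send_completed;	/* send WR completion seen */
	bool resp_completed;	/* NVMe response (CQE) received */
	bool need_inval;	/* MR still needs (local) invalidation */
};

/* LOCAL_INV completion: clear need_inval, end only if both other events
 * have already happened. */
static bool demo_inv_rkey_done(struct demo_request *req)
{
	bool end;

	pthread_mutex_lock(&req->lock);
	req->need_inval = false;
	end = req->send_completed && req->resp_completed;
	pthread_mutex_unlock(&req->lock);
	return end;	/* caller completes the I/O iff true */
}

/* Send completion: end only if the response arrived and no invalidation
 * is still outstanding. */
static bool demo_send_done(struct demo_request *req)
{
	bool end;

	pthread_mutex_lock(&req->lock);
	req->send_completed = true;
	end = req->resp_completed && !req->need_inval;
	pthread_mutex_unlock(&req->lock);
	return end;
}

/* Response completion: same gating from the receive side. */
static bool demo_resp_done(struct demo_request *req)
{
	bool end;

	pthread_mutex_lock(&req->lock);
	req->resp_completed = true;
	end = req->send_completed && !req->need_inval;
	pthread_mutex_unlock(&req->lock);
	return end;
}
```

Whichever of the three handlers observes the last outstanding event sees the
condition become true and completes the request, so the unmap of host memory
can never race with a still-pending local invalidation.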