From patchwork Fri Feb 5 21:13:16 2016
X-Patchwork-Submitter: Steve Wise
X-Patchwork-Id: 8239741
Message-Id: <2da1db58d642789e8df154e34d622a37295d1ba3.1454709317.git.swise@chelsio.com>
From: Steve Wise
Date: Fri, 5 Feb 2016 13:13:16 -0800
Subject: [PATCH 1/3] IB: new common API for draining a queue pair
To: linux-rdma@vger.kernel.org

Add a provider-specific drain_qp function for providers that need
special drain logic.

Add the static function __ib_drain_qp(), which posts no-op WRs to the
RQ and SQ and blocks until their completions are processed.  This
ensures that the application's completions have all been processed.

Add the API function ib_drain_qp(), which calls the provider-specific
drain function if one exists, and __ib_drain_qp() otherwise.

Signed-off-by: Steve Wise
Reviewed-by: Chuck Lever
---
 drivers/infiniband/core/verbs.c | 72 +++++++++++++++++++++++++++++++++++++++++
 include/rdma/ib_verbs.h         |  2 ++
 2 files changed, 74 insertions(+)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 5af6d02..31b82cd 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1657,3 +1657,75 @@ next_page:
 	return i;
 }
 EXPORT_SYMBOL(ib_sg_to_pages);
+
+struct ib_drain_cqe {
+	struct ib_cqe cqe;
+	struct completion done;
+};
+
+static void ib_drain_qp_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct ib_drain_cqe *cqe = container_of(wc->wr_cqe, struct ib_drain_cqe,
+						cqe);
+
+	complete(&cqe->done);
+}
+
+/*
+ * Post a WR and block until its completion is reaped for both the RQ and SQ.
+ */
+static void __ib_drain_qp(struct ib_qp *qp)
+{
+	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+	struct ib_drain_cqe rdrain, sdrain;
+	struct ib_recv_wr rwr = {}, *bad_rwr;
+	struct ib_send_wr swr = {}, *bad_swr;
+	int ret;
+
+	rwr.wr_cqe = &rdrain.cqe;
+	rdrain.cqe.done = ib_drain_qp_done;
+	init_completion(&rdrain.done);
+
+	swr.wr_cqe = &sdrain.cqe;
+	sdrain.cqe.done = ib_drain_qp_done;
+	init_completion(&sdrain.done);
+
+	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
+	if (ret) {
+		WARN_ONCE(ret, "failed to drain QP: %d\n", ret);
+		return;
+	}
+
+	ret = ib_post_recv(qp, &rwr, &bad_rwr);
+	if (ret) {
+		WARN_ONCE(ret, "failed to drain recv queue: %d\n", ret);
+		return;
+	}
+
+	ret = ib_post_send(qp, &swr, &bad_swr);
+	if (ret) {
+		WARN_ONCE(ret, "failed to drain send queue: %d\n", ret);
+		return;
+	}
+
+	wait_for_completion(&rdrain.done);
+	wait_for_completion(&sdrain.done);
+}
+
+/**
+ * ib_drain_qp() - Block until all CQEs have been consumed by the
+ *		   application.
+ * @qp:		   queue pair to drain
+ *
+ * If the device has a provider-specific drain function, then
+ * call that.  Otherwise call the generic drain function
+ * __ib_drain_qp().
+ */
+void ib_drain_qp(struct ib_qp *qp)
+{
+	if (qp->device->drain_qp)
+		qp->device->drain_qp(qp);
+	else
+		__ib_drain_qp(qp);
+}
+EXPORT_SYMBOL(ib_drain_qp);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 284b00c..d8533ab 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1846,6 +1846,7 @@ struct ib_device {
 	int                        (*check_mr_status)(struct ib_mr *mr,
 						      u32 check_mask,
 						      struct ib_mr_status *mr_status);
 	void			   (*disassociate_ucontext)(struct ib_ucontext *ibcontext);
+	void			   (*drain_qp)(struct ib_qp *qp);
 
 	struct ib_dma_mapping_ops   *dma_ops;
 
@@ -3094,4 +3095,5 @@ int ib_sg_to_pages(struct ib_mr *mr,
 		   int sg_nents,
 		   int (*set_page)(struct ib_mr *, u64));
 
+void ib_drain_qp(struct ib_qp *qp);
 #endif /* IB_VERBS_H */
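
A note on usage, not part of the patch itself: a ULP would typically call
ib_drain_qp() during connection teardown, after it has stopped posting new
work but before it frees buffers or contexts that in-flight WRs may still
reference.  A minimal sketch, assuming the QP's CQs were allocated with
ib_alloc_cq() so that the drain CQEs posted by the generic __ib_drain_qp()
get reaped; struct my_conn and my_conn_teardown() are hypothetical names
used only for illustration:

	/*
	 * Hypothetical ULP teardown path; struct my_conn and its fields
	 * are illustrative only.  In the generic path, ib_drain_qp()
	 * moves the QP to IB_QPS_ERR, which flushes all outstanding WRs,
	 * then blocks until the flush completions have been reaped.
	 */
	static void my_conn_teardown(struct my_conn *conn)
	{
		/* Wait until every posted WR has generated a reaped CQE. */
		ib_drain_qp(conn->qp);

		/* The HCA no longer touches our buffers; safe to clean up. */
		ib_destroy_qp(conn->qp);
		ib_free_cq(conn->cq);
	}

Since the generic drain blocks in wait_for_completion(), it must not be
called from the CQ's own completion-processing context, or it would
deadlock waiting for the CQEs that context is supposed to reap.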