From patchwork Tue Jul 5 14:52:11 2022
X-Patchwork-Submitter: Xiao Yang
X-Patchwork-Id: 12906665
From: Xiao Yang <yangx.jy@fujitsu.com>
Subject: [PATCH v2 1/2] RDMA/rxe: Add common rxe_prepare_res()
Date: Tue, 5 Jul 2022 22:52:11 +0800
Message-ID: <20220705145212.12014-1-yangx.jy@fujitsu.com>
X-Mailer: git-send-email 2.34.1
X-Mailing-List: linux-rdma@vger.kernel.org

It is redundant to prepare resources for Read and Atomic requests with
separate functions. Replace them with a common rxe_prepare_res() that
takes the resource type as a parameter. In addition, the common
rxe_prepare_res() can also be reused by the new Flush and Atomic Write
requests in the future.
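As an illustration of the consolidation pattern (not part of the patch
itself): the sketch below is a standalone userspace analogue, where the
names prepare_res, res_type and read_resid are simplified stand-ins for
the rxe identifiers, and the ceil(resid / mtu) packet count mirrors the
read-reply PSN math in the diff that follows.

#include <stdint.h>
#include <stdio.h>

enum res_type { RES_READ, RES_ATOMIC };

struct resp_res {
	enum res_type type;
	uint32_t first_psn;
	uint32_t last_psn;
	uint32_t cur_psn;
	uint32_t read_resid;	/* only meaningful for RES_READ */
};

/* Common setup runs once; the switch covers only what differs per type. */
static void prepare_res(struct resp_res *res, enum res_type type,
			uint32_t psn, uint32_t resid, uint32_t mtu)
{
	res->type = type;
	res->first_psn = psn;
	res->cur_psn = psn;

	switch (type) {
	case RES_READ: {
		/* A read reply spans ceil(resid / mtu) packets, at least one. */
		uint32_t pkts = resid ? (resid + mtu - 1) / mtu : 1;

		res->read_resid = resid;
		res->last_psn = psn + pkts - 1;
		break;
	}
	case RES_ATOMIC:
		/* An atomic reply is always a single packet. */
		res->last_psn = psn;
		break;
	}
}

int main(void)
{
	struct resp_res res;

	prepare_res(&res, RES_READ, 100, 10000, 4096);	/* 3 packets */
	printf("read:   first=%u last=%u\n", res.first_psn, res.last_psn);

	prepare_res(&res, RES_ATOMIC, 200, 0, 4096);
	printf("atomic: first=%u last=%u\n", res.first_psn, res.last_psn);
	return 0;
}

The point of the pattern is that the shared bookkeeping is written once,
while each request type fills in only its own fields, which is what lets
future request types reuse the same helper.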
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Reviewed-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 71 +++++++++++++---------------
 1 file changed, 32 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index ccdfc1a6b659..5536582b8fe4 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -553,27 +553,48 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	return rc;
 }
 
-/* Guarantee atomicity of atomic operations at the machine level. */
-static DEFINE_SPINLOCK(atomic_ops_lock);
-
-static struct resp_res *rxe_prepare_atomic_res(struct rxe_qp *qp,
-					       struct rxe_pkt_info *pkt)
+static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
+					struct rxe_pkt_info *pkt,
+					int type)
 {
 	struct resp_res *res;
+	u32 pkts;
 
 	res = &qp->resp.resources[qp->resp.res_head];
 	rxe_advance_resp_resource(qp);
 	free_rd_atomic_resource(qp, res);
 
-	res->type = RXE_ATOMIC_MASK;
-	res->first_psn = pkt->psn;
-	res->last_psn = pkt->psn;
-	res->cur_psn = pkt->psn;
+	res->type = type;
 	res->replay = 0;
 
+	switch (type) {
+	case RXE_READ_MASK:
+		res->read.va = qp->resp.va + qp->resp.offset;
+		res->read.va_org = qp->resp.va + qp->resp.offset;
+		res->read.resid = qp->resp.resid;
+		res->read.length = qp->resp.resid;
+		res->read.rkey = qp->resp.rkey;
+
+		pkts = max_t(u32, (reth_len(pkt) + qp->mtu - 1)/qp->mtu, 1);
+		res->first_psn = pkt->psn;
+		res->cur_psn = pkt->psn;
+		res->last_psn = (pkt->psn + pkts - 1) & BTH_PSN_MASK;
+
+		res->state = rdatm_res_state_new;
+		break;
+	case RXE_ATOMIC_MASK:
+		res->first_psn = pkt->psn;
+		res->last_psn = pkt->psn;
+		res->cur_psn = pkt->psn;
+		break;
+	}
+
 	return res;
 }
 
+/* Guarantee atomicity of atomic operations at the machine level. */
+static DEFINE_SPINLOCK(atomic_ops_lock);
+
 static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
 					 struct rxe_pkt_info *pkt)
 {
@@ -584,7 +605,7 @@ static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
 	u64 value;
 
 	if (!res) {
-		res = rxe_prepare_atomic_res(qp, pkt);
+		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
 		qp->resp.res = res;
 	}
 
@@ -680,34 +701,6 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	return skb;
 }
 
-static struct resp_res *rxe_prepare_read_res(struct rxe_qp *qp,
-					     struct rxe_pkt_info *pkt)
-{
-	struct resp_res *res;
-	u32 pkts;
-
-	res = &qp->resp.resources[qp->resp.res_head];
-	rxe_advance_resp_resource(qp);
-	free_rd_atomic_resource(qp, res);
-
-	res->type = RXE_READ_MASK;
-	res->replay = 0;
-	res->read.va = qp->resp.va + qp->resp.offset;
-	res->read.va_org = qp->resp.va + qp->resp.offset;
-	res->read.resid = qp->resp.resid;
-	res->read.length = qp->resp.resid;
-	res->read.rkey = qp->resp.rkey;
-
-	pkts = max_t(u32, (reth_len(pkt) + qp->mtu - 1)/qp->mtu, 1);
-	res->first_psn = pkt->psn;
-	res->cur_psn = pkt->psn;
-	res->last_psn = (pkt->psn + pkts - 1) & BTH_PSN_MASK;
-
-	res->state = rdatm_res_state_new;
-
-	return res;
-}
-
 /**
  * rxe_recheck_mr - revalidate MR from rkey and get a reference
  * @qp: the qp
@@ -778,7 +771,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	struct rxe_mr *mr;
 
 	if (!res) {
-		res = rxe_prepare_read_res(qp, req_pkt);
+		res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK);
 		qp->resp.res = res;
 	}
 
From patchwork Tue Jul 5 14:52:12 2022
X-Patchwork-Submitter: Xiao Yang
X-Patchwork-Id: 12906664
From: Xiao Yang <yangx.jy@fujitsu.com>
Subject: [PATCH v2 2/2] RDMA/rxe: Rename rxe_atomic_reply to atomic_reply
Date: Tue, 5 Jul 2022 22:52:12 +0800
Message-ID: <20220705145212.12014-2-yangx.jy@fujitsu.com>
In-Reply-To: <20220705145212.12014-1-yangx.jy@fujitsu.com>
References: <20220705145212.12014-1-yangx.jy@fujitsu.com>
X-Mailer: git-send-email 2.34.1
X-Mailing-List: linux-rdma@vger.kernel.org

It is better to use a unified naming format: rename rxe_atomic_reply()
to atomic_reply() so that it matches read_reply().

Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 5536582b8fe4..265e46fe050f 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -595,7 +595,7 @@ static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
-static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
+static enum resp_states atomic_reply(struct rxe_qp *qp,
 					 struct rxe_pkt_info *pkt)
 {
 	u64 *vaddr;
@@ -1333,7 +1333,7 @@ int rxe_responder(void *arg)
 		state = read_reply(qp, pkt);
 		break;
 	case RESPST_ATOMIC_REPLY:
-		state = rxe_atomic_reply(qp, pkt);
+		state = atomic_reply(qp, pkt);
 		break;
 	case RESPST_ACKNOWLEDGE:
 		state = acknowledge(qp, pkt);