From patchwork Thu Jan 11 12:00:33 2024
X-Patchwork-Submitter: Wen Gu
X-Patchwork-Id: 13517322
X-Patchwork-Delegate: kuba@kernel.org
From: Wen Gu <guwen@linux.alibaba.com>
To: wintera@linux.ibm.com, wenjia@linux.ibm.com, hca@linux.ibm.com,
    gor@linux.ibm.com, agordeev@linux.ibm.com, davem@davemloft.net,
    edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
    jaka@linux.ibm.com
Cc: borntraeger@linux.ibm.com, svens@linux.ibm.com,
    alibuda@linux.alibaba.com, tonylu@linux.alibaba.com,
    guwen@linux.alibaba.com, linux-s390@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 12/15] net/smc: adapt cursor update when sndbuf and
 peer DMB are merged
Date: Thu, 11 Jan 2024 20:00:33 +0800
Message-Id: <20240111120036.109903-13-guwen@linux.alibaba.com>
In-Reply-To: <20240111120036.109903-1-guwen@linux.alibaba.com>
References: <20240111120036.109903-1-guwen@linux.alibaba.com>

Since the ghost sndbuf shares its physical memory with the peer DMB,
cursor update handling must be adapted so that data not yet consumed
by the peer cannot be overwritten. Specifically, fin_curs and
sndbuf_space, which were originally updated right after sending the
CDC message, must now not be updated until the peer has updated its
cons_curs.
Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
---
 net/smc/smc_cdc.c | 52 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 11 deletions(-)

diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
index c820ef197610..e938fe3bcc7c 100644
--- a/net/smc/smc_cdc.c
+++ b/net/smc/smc_cdc.c
@@ -18,6 +18,7 @@
 #include "smc_tx.h"
 #include "smc_rx.h"
 #include "smc_close.h"
+#include "smc_ism.h"
 
 /********************************** send *************************************/
 
@@ -255,17 +256,25 @@ int smcd_cdc_msg_send(struct smc_connection *conn)
 		return rc;
 	smc_curs_copy(&conn->rx_curs_confirmed, &curs, conn);
 	conn->local_rx_ctrl.prod_flags.cons_curs_upd_req = 0;
-	/* Calculate transmitted data and increment free send buffer space */
-	diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin,
-			     &conn->tx_curs_sent);
-	/* increased by confirmed number of bytes */
-	smp_mb__before_atomic();
-	atomic_add(diff, &conn->sndbuf_space);
-	/* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */
-	smp_mb__after_atomic();
-	smc_curs_copy(&conn->tx_curs_fin, &conn->tx_curs_sent, conn);
+	if (!smc_ism_support_dmb_nocopy(conn->lgr->smcd)) {
+		/* If the ghost sndbuf shares the peer DMB's memory,
+		 * don't update tx_curs_fin and sndbuf_space here;
+		 * defer until the peer has consumed the data.
+		 */
+		/* Calculate transmitted data and increment free
+		 * send buffer space
+		 */
+		diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin,
+				     &conn->tx_curs_sent);
+		/* increased by confirmed number of bytes */
+		smp_mb__before_atomic();
+		atomic_add(diff, &conn->sndbuf_space);
+		/* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */
+		smp_mb__after_atomic();
+		smc_curs_copy(&conn->tx_curs_fin, &conn->tx_curs_sent, conn);
 
-	smc_tx_sndbuf_nonfull(smc);
+		smc_tx_sndbuf_nonfull(smc);
+	}
 	return rc;
 }
 
@@ -323,7 +332,7 @@ static void smc_cdc_msg_recv_action(struct smc_sock *smc,
 {
 	union smc_host_cursor cons_old, prod_old;
 	struct smc_connection *conn = &smc->conn;
-	int diff_cons, diff_prod;
+	int diff_cons, diff_prod, diff_tx;
 
 	smc_curs_copy(&prod_old, &conn->local_rx_ctrl.prod, conn);
 	smc_curs_copy(&cons_old, &conn->local_rx_ctrl.cons, conn);
@@ -339,6 +348,27 @@ static void smc_cdc_msg_recv_action(struct smc_sock *smc,
 		atomic_add(diff_cons, &conn->peer_rmbe_space);
 		/* guarantee 0 <= peer_rmbe_space <= peer_rmbe_size */
 		smp_mb__after_atomic();
+
+		if (conn->lgr->is_smcd &&
+		    smc_ism_support_dmb_nocopy(conn->lgr->smcd)) {
+			/* Ghost sndbuf shares the same memory region with
+			 * peer RMB, so update tx_curs_fin and sndbuf_space
+			 * when peer has consumed the data.
+			 */
+			/* calculate peer rmb consumed data */
+			diff_tx = smc_curs_diff(conn->sndbuf_desc->len,
+						&conn->tx_curs_fin,
+						&conn->local_rx_ctrl.cons);
+			/* increase local sndbuf space and fin_curs */
+			smp_mb__before_atomic();
+			atomic_add(diff_tx, &conn->sndbuf_space);
+			/* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */
+			smp_mb__after_atomic();
+			smc_curs_copy(&conn->tx_curs_fin,
+				      &conn->local_rx_ctrl.cons, conn);
+
+			smc_tx_sndbuf_nonfull(smc);
+		}
 	}
 
 	diff_prod = smc_curs_diff(conn->rmb_desc->len, &prod_old,
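
For readers who want the bookkeeping change in isolation, below is a
minimal userspace sketch (not kernel code) of the two paths this patch
distinguishes. struct conn, on_cdc_sent() and on_peer_cons_update()
are hypothetical stand-ins for struct smc_connection and the two call
sites above, and the cursors are simplified to linear byte counters;
the real code uses smc_curs_diff()/smc_curs_copy(), which handle
ring-buffer wrap-around.

/* Sketch only: models the cursor bookkeeping, not the SMC-D protocol. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct conn {
	int sndbuf_len;          /* send buffer size in bytes */
	int tx_curs_sent;        /* bytes handed to the device so far */
	int tx_curs_fin;         /* bytes whose buffer space is reclaimed */
	atomic_int sndbuf_space; /* free bytes left in the send buffer */
	bool dmb_nocopy;         /* sndbuf merged with the peer DMB? */
};

/* After a CDC message is sent: without merging, the data was already
 * copied into the peer DMB, so its sndbuf space is free immediately.
 * With merging, reclaiming it here would let fresh data overwrite
 * bytes the peer has not consumed yet, so the update is deferred.
 */
static void on_cdc_sent(struct conn *c)
{
	if (!c->dmb_nocopy) {
		int diff = c->tx_curs_sent - c->tx_curs_fin;

		atomic_fetch_add(&c->sndbuf_space, diff);
		c->tx_curs_fin = c->tx_curs_sent;
	}
}

/* When the peer advances cons_curs: only now is the merged region
 * really reusable, so this is where the deferred update happens.
 */
static void on_peer_cons_update(struct conn *c, int peer_cons)
{
	if (c->dmb_nocopy) {
		int diff = peer_cons - c->tx_curs_fin;

		atomic_fetch_add(&c->sndbuf_space, diff);
		c->tx_curs_fin = peer_cons;
	}
}

int main(void)
{
	struct conn c = { .sndbuf_len = 16, .tx_curs_sent = 8,
			  .tx_curs_fin = 0, .dmb_nocopy = true };

	atomic_init(&c.sndbuf_space, 8);   /* 8 of 16 bytes in flight */
	on_cdc_sent(&c);                   /* merged: nothing freed yet */
	printf("after send: %d free\n", atomic_load(&c.sndbuf_space));
	on_peer_cons_update(&c, 8);        /* peer consumed all 8 bytes */
	printf("after cons: %d free\n", atomic_load(&c.sndbuf_space));
	return 0;
}

The prints show 8 free after the send and 16 free only after the peer's
consumer-cursor update, which is exactly the ordering the patch enforces
for merged buffers.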