From patchwork Wed Feb 15 16:18:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141896 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BA7DC64EC4 for ; Wed, 15 Feb 2023 16:19:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229485AbjBOQS7 (ORCPT ); Wed, 15 Feb 2023 11:18:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229963AbjBOQSs (ORCPT ); Wed, 15 Feb 2023 11:18:48 -0500 Received: from out30-98.freemail.mail.aliyun.com (out30-98.freemail.mail.aliyun.com [115.124.30.98]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4059B2FCC7; Wed, 15 Feb 2023 08:18:41 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R941e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl3Oxi_1676477916; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl3Oxi_1676477916) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:37 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 1/9] net/smc: Decouple ism_dev from SMC-D device dump Date: Thu, 16 Feb 2023 00:18:17 +0800 Message-Id: <1676477905-88043-2-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch helps to decouple SMC-D device and ISM device, allowing different underlying device forms, such as non-PCI devices. 
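For illustration only, a minimal sketch (not part of this patch) of what the decoupling enables: a backend without a PCI parent can populate struct smcd_dev directly, leaving the new parent_pci_dev pointer NULL so that the netlink device dump reports zeroed PCI attributes instead of dereferencing an ISM-specific structure. The backend struct and function names below are invented for this example; only the smcd_dev fields are the ones touched by the patch.

/* hypothetical non-PCI backend wiring up the fields this patch relies on */
struct my_backend;			/* invented, stands for any non-PCI device */

static void my_backend_init_smcd(struct smcd_dev *smcd, struct my_backend *be)
{
	smcd->priv = be;		/* backend-private data, no longer assumed
					 * to be a struct ism_dev */
	smcd->parent_pci_dev = NULL;	/* no PCI parent: the SMC-D device dump
					 * then shows zeroed PCI FID/CHID */
}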
Signed-off-by: Wen Gu --- include/net/smc.h | 1 + net/smc/smc_ism.c | 6 +++--- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/include/net/smc.h b/include/net/smc.h index 597cb93..debf126 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -76,6 +76,7 @@ struct smcd_ops { struct smcd_dev { const struct smcd_ops *ops; void *priv; + struct device *parent_pci_dev; struct list_head list; spinlock_t lock; struct smc_connection **conn; diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index 3b0b771..8023bb0 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -231,11 +231,9 @@ static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd, struct smc_pci_dev smc_pci_dev; struct nlattr *port_attrs; struct nlattr *attrs; - struct ism_dev *ism; int use_cnt = 0; void *nlh; - ism = smcd->priv; nlh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, &smc_gen_nl_family, NLM_F_MULTI, SMC_NETLINK_GET_DEV_SMCD); @@ -250,7 +248,8 @@ static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd, if (nla_put_u8(skb, SMC_NLA_DEV_IS_CRIT, use_cnt > 0)) goto errattr; memset(&smc_pci_dev, 0, sizeof(smc_pci_dev)); - smc_set_pci_values(to_pci_dev(ism->dev.parent), &smc_pci_dev); + if (smcd->parent_pci_dev) + smc_set_pci_values(to_pci_dev(smcd->parent_pci_dev), &smc_pci_dev); if (nla_put_u32(skb, SMC_NLA_DEV_PCI_FID, smc_pci_dev.pci_fid)) goto errattr; if (nla_put_u16(skb, SMC_NLA_DEV_PCI_CHID, smc_pci_dev.pci_pchid)) @@ -420,6 +419,7 @@ static void smcd_register_dev(struct ism_dev *ism) if (!smcd) return; smcd->priv = ism; + smcd->parent_pci_dev = ism->dev.parent; ism_set_priv(ism, &smc_ism_client, smcd); if (smc_pnetid_by_dev_port(&ism->pdev->dev, 0, smcd->pnetid)) smc_pnetid_by_table_smcd(smcd); From patchwork Wed Feb 15 16:18:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141895 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 923DBC636CC for ; Wed, 15 Feb 2023 16:18:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230033AbjBOQSu (ORCPT ); Wed, 15 Feb 2023 11:18:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57044 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229907AbjBOQSs (ORCPT ); Wed, 15 Feb 2023 11:18:48 -0500 Received: from out30-113.freemail.mail.aliyun.com (out30-113.freemail.mail.aliyun.com [115.124.30.113]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 791832DE4A; Wed, 15 Feb 2023 08:18:42 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R671e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045192;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl04wX_1676477917; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl04wX_1676477917) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:39 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 2/9] net/smc: Decouple ism_dev from SMC-D DMB registration Date: Thu, 16 Feb 2023 00:18:18 +0800 Message-Id: 
<1676477905-88043-3-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch tries to decouple ISM device from SMC-D DMB registration, So that the register_dmb option is not restricted to be used by ISM device. Signed-off-by: Wen Gu --- drivers/s390/net/ism_drv.c | 5 +++-- include/net/smc.h | 4 ++-- net/smc/smc_ism.c | 7 ++----- 3 files changed, 7 insertions(+), 9 deletions(-) diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c index eb7e134..93aff58 100644 --- a/drivers/s390/net/ism_drv.c +++ b/drivers/s390/net/ism_drv.c @@ -799,9 +799,10 @@ static int smcd_query_rgid(struct smcd_dev *smcd, u64 rgid, u32 vid_valid, } static int smcd_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb, - struct ism_client *client) + void *client_priv) { - return ism_register_dmb(smcd->priv, (struct ism_dmb *)dmb, client); + return ism_register_dmb(smcd->priv, (struct ism_dmb *)dmb, + (struct ism_client *)client_priv); } static int smcd_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb) diff --git a/include/net/smc.h b/include/net/smc.h index debf126..9b0da45 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -50,13 +50,12 @@ struct smcd_dmb { #define ISM_ERROR 0xFFFF struct smcd_dev; -struct ism_client; struct smcd_ops { int (*query_remote_gid)(struct smcd_dev *dev, u64 rgid, u32 vid_valid, u32 vid); int (*register_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb, - struct ism_client *client); + void *client_priv); int (*unregister_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb); int (*add_vlan_id)(struct smcd_dev *dev, u64 vlan_id); int (*del_vlan_id)(struct smcd_dev *dev, u64 vlan_id); @@ -76,6 +75,7 @@ struct smcd_ops { struct smcd_dev { const struct smcd_ops *ops; void *priv; + void *client_priv; struct device *parent_pci_dev; struct list_head list; spinlock_t lock; diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index 8023bb0..93c7415 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -200,7 +200,6 @@ int smc_ism_unregister_dmb(struct smcd_dev *smcd, struct smc_buf_desc *dmb_desc) int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len, struct smc_buf_desc *dmb_desc) { -#if IS_ENABLED(CONFIG_ISM) struct smcd_dmb dmb; int rc; @@ -209,7 +208,7 @@ int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len, dmb.sba_idx = dmb_desc->sba_idx; dmb.vlan_id = lgr->vlan_id; dmb.rgid = lgr->peer_gid; - rc = lgr->smcd->ops->register_dmb(lgr->smcd, &dmb, &smc_ism_client); + rc = lgr->smcd->ops->register_dmb(lgr->smcd, &dmb, lgr->smcd->client_priv); if (!rc) { dmb_desc->sba_idx = dmb.sba_idx; dmb_desc->token = dmb.dmb_tok; @@ -218,9 +217,6 @@ int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len, dmb_desc->len = dmb.dmb_len; } return rc; -#else - return 0; -#endif } static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd, @@ -419,6 +415,7 @@ static void smcd_register_dev(struct ism_dev *ism) if (!smcd) return; smcd->priv = ism; + smcd->client_priv = &smc_ism_client; smcd->parent_pci_dev = ism->dev.parent; ism_set_priv(ism, &smc_ism_client, smcd); if (smc_pnetid_by_dev_port(&ism->pdev->dev, 0, smcd->pnetid)) From patchwork Wed Feb 15 16:18:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141897 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA433C636D4 for ; Wed, 15 Feb 2023 16:19:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230059AbjBOQTB (ORCPT ); Wed, 15 Feb 2023 11:19:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57052 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229981AbjBOQSs (ORCPT ); Wed, 15 Feb 2023 11:18:48 -0500 Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1619539CC3; Wed, 15 Feb 2023 08:18:43 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R171e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045168;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl3meW_1676477919; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl3meW_1676477919) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:40 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 3/9] net/smc: Extract v2 check helper from SMC-D device registration Date: Thu, 16 Feb 2023 00:18:19 +0800 Message-Id: <1676477905-88043-4-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch extracts v2-capable logic from the process of registering the ISM device as an SMC-D device, so that the registration process of other underlying devices can also use it. Signed-off-by: Wen Gu --- net/smc/smc_ism.c | 27 +++++++++++++++++---------- net/smc/smc_ism.h | 1 + 2 files changed, 18 insertions(+), 10 deletions(-) diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index 93c7415..9504273 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -69,6 +69,22 @@ bool smc_ism_is_v2_capable(void) return smc_ism_v2_capable; } +/* must be called under smcd_dev_list.mutex lock */ +void smc_ism_check_v2_capable(struct smcd_dev *smcd) +{ + u8 *system_eid = NULL; + + if (!list_empty(&smcd_dev_list.list)) + return; + + system_eid = smcd->ops->get_system_eid(); + if (system_eid[24] != '0' || system_eid[28] != '0') { + smc_ism_v2_capable = true; + memcpy(smc_ism_v2_system_eid, system_eid, + SMC_MAX_EID_LEN); + } +} + /* Set a connection using this DMBE. 
*/ void smc_ism_set_conn(struct smc_connection *conn) { @@ -422,16 +438,7 @@ static void smcd_register_dev(struct ism_dev *ism) smc_pnetid_by_table_smcd(smcd); mutex_lock(&smcd_dev_list.mutex); - if (list_empty(&smcd_dev_list.list)) { - u8 *system_eid = NULL; - - system_eid = smcd->ops->get_system_eid(); - if (system_eid[24] != '0' || system_eid[28] != '0') { - smc_ism_v2_capable = true; - memcpy(smc_ism_v2_system_eid, system_eid, - SMC_MAX_EID_LEN); - } - } + smc_ism_check_v2_capable(smcd); /* sort list: devices without pnetid before devices with pnetid */ if (smcd->pnetid[0]) list_add_tail(&smcd->list, &smcd_dev_list.list); diff --git a/net/smc/smc_ism.h b/net/smc/smc_ism.h index 832b2f4..14d2e77 100644 --- a/net/smc/smc_ism.h +++ b/net/smc/smc_ism.h @@ -42,6 +42,7 @@ int smc_ism_register_dmb(struct smc_link_group *lgr, int buf_size, void smc_ism_get_system_eid(u8 **eid); u16 smc_ism_get_chid(struct smcd_dev *dev); bool smc_ism_is_v2_capable(void); +void smc_ism_check_v2_capable(struct smcd_dev *dev); int smc_ism_init(void); void smc_ism_exit(void); int smcd_nl_get_device(struct sk_buff *skb, struct netlink_callback *cb); From patchwork Wed Feb 15 16:18:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141899 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8078FC636D4 for ; Wed, 15 Feb 2023 16:19:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230084AbjBOQTF (ORCPT ); Wed, 15 Feb 2023 11:19:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230020AbjBOQSu (ORCPT ); Wed, 15 Feb 2023 11:18:50 -0500 Received: from out30-119.freemail.mail.aliyun.com (out30-119.freemail.mail.aliyun.com [115.124.30.119]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 663103A864; Wed, 15 Feb 2023 08:18:46 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R201e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl3mf1_1676477921; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl3mf1_1676477921) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:42 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 4/9] net/smc: Introduce SMC-D loopback device Date: Thu, 16 Feb 2023 00:18:20 +0800 Message-Id: <1676477905-88043-5-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch introduces a kind of loopback device for SMC-D, thus enabling the SMC communication between two local sockets within one OS instance. The loopback device supports basic capabilities defined by SMC-D options, and exposed as an SMC-D v2 device. 
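As an aside on how this ties into the v2 check factored out in the previous patch: the 32-byte system EID is laid out as a 24-byte ID string, a 4-byte serial number and a 4-byte type, and smc_ism_check_v2_capable() only inspects the first byte of those last two fields; a device whose serial number and type both start with '0' is not treated as SMC-Dv2 capable. The loopback SEID defined in this patch sets both fields to "1000", so the device is exposed as v2-capable. A minimal sketch of that layout and check (the helper name is invented; the real check lives inline in smc_ism_check_v2_capable()):

/* layout assumed by the v2-capability check; offsets 24 and 28 are the
 * first characters of the serial-number and type fields
 */
struct smcd_seid {
	u8 seid_string[24];	/* bytes  0..23 */
	u8 serial_number[4];	/* bytes 24..27 */
	u8 type[4];		/* bytes 28..31 */
};

static bool seid_is_v2_capable(const u8 *system_eid)
{
	return system_eid[24] != '0' || system_eid[28] != '0';
}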
Signed-off-by: Wen Gu --- include/net/smc.h | 6 + net/smc/Makefile | 2 +- net/smc/af_smc.c | 12 +- net/smc/smc_cdc.c | 9 +- net/smc/smc_cdc.h | 1 + net/smc/smc_loopback.c | 363 +++++++++++++++++++++++++++++++++++++++++++++++++ net/smc/smc_loopback.h | 51 +++++++ 7 files changed, 441 insertions(+), 3 deletions(-) create mode 100644 net/smc/smc_loopback.c create mode 100644 net/smc/smc_loopback.h diff --git a/include/net/smc.h b/include/net/smc.h index 9b0da45..50df54d 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -41,6 +41,12 @@ struct smcd_dmb { dma_addr_t dma_addr; }; +struct smcd_seid { + u8 seid_string[24]; + u8 serial_number[4]; + u8 type[4]; +}; + #define ISM_EVENT_DMB 0 #define ISM_EVENT_GID 1 #define ISM_EVENT_SWR 2 diff --git a/net/smc/Makefile b/net/smc/Makefile index 875efcd..a8c3711 100644 --- a/net/smc/Makefile +++ b/net/smc/Makefile @@ -4,5 +4,5 @@ obj-$(CONFIG_SMC) += smc.o obj-$(CONFIG_SMC_DIAG) += smc_diag.o smc-y := af_smc.o smc_pnet.o smc_ib.o smc_clc.o smc_core.o smc_wr.o smc_llc.o smc-y += smc_cdc.o smc_tx.o smc_rx.o smc_close.o smc_ism.o smc_netlink.o smc_stats.o -smc-y += smc_tracepoint.o +smc-y += smc_tracepoint.o smc_loopback.o smc-$(CONFIG_SYSCTL) += smc_sysctl.o diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index 1c0fe9b..dcbd7e3 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -53,6 +53,7 @@ #include "smc_stats.h" #include "smc_tracepoint.h" #include "smc_sysctl.h" +#include "smc_loopback.h" static DEFINE_MUTEX(smc_server_lgr_pending); /* serialize link group * creation on server @@ -3454,15 +3455,23 @@ static int __init smc_init(void) goto out_sock; } + rc = smc_loopback_init(); + if (rc) { + pr_err("%s: smc_loopback_init fails with %d\n", __func__, rc); + goto out_ib; + } + rc = tcp_register_ulp(&smc_ulp_ops); if (rc) { pr_err("%s: tcp_ulp_register fails with %d\n", __func__, rc); - goto out_ib; + goto out_lo; } static_branch_enable(&tcp_have_smc); return 0; +out_lo: + smc_loopback_exit(); out_ib: smc_ib_unregister_client(); out_sock: @@ -3499,6 +3508,7 @@ static void __exit smc_exit(void) tcp_unregister_ulp(&smc_ulp_ops); sock_unregister(PF_SMC); smc_core_exit(); + smc_loopback_exit(); smc_ib_unregister_client(); smc_ism_exit(); destroy_workqueue(smc_close_wq); diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c index 53f63bf..9023684 100644 --- a/net/smc/smc_cdc.c +++ b/net/smc/smc_cdc.c @@ -407,7 +407,14 @@ static void smc_cdc_msg_recv(struct smc_sock *smc, struct smc_cdc_msg *cdc) */ static void smcd_cdc_rx_tsklet(struct tasklet_struct *t) { - struct smc_connection *conn = from_tasklet(conn, t, rx_tsklet); + struct smc_connection *conn = + from_tasklet(conn, t, rx_tsklet); + + smcd_cdc_rx_handler(conn); +} + +void smcd_cdc_rx_handler(struct smc_connection *conn) +{ struct smcd_cdc_msg *data_cdc; struct smcd_cdc_msg cdc; struct smc_sock *smc; diff --git a/net/smc/smc_cdc.h b/net/smc/smc_cdc.h index 696cc11..11559d4 100644 --- a/net/smc/smc_cdc.h +++ b/net/smc/smc_cdc.h @@ -301,5 +301,6 @@ int smcr_cdc_msg_send_validation(struct smc_connection *conn, struct smc_wr_buf *wr_buf); int smc_cdc_init(void) __init; void smcd_cdc_rx_init(struct smc_connection *conn); +void smcd_cdc_rx_handler(struct smc_connection *conn); #endif /* SMC_CDC_H */ diff --git a/net/smc/smc_loopback.c b/net/smc/smc_loopback.c new file mode 100644 index 0000000..38bacd8 --- /dev/null +++ b/net/smc/smc_loopback.c @@ -0,0 +1,363 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shared Memory Communications Direct over loopback device. 
+ * + * Provide a SMC-D loopback dummy device. + * + * Copyright (c) 2022, Alibaba Inc. + * + * Author: Wen Gu + * Tony Lu + * + */ + +#include +#include +#include + +#include "smc_cdc.h" +#include "smc_ism.h" +#include "smc_loopback.h" + +static struct smc_lo_dev *lo_dev; +static const char smc_lo_dev_name[] = "smcd_loopback_dev"; + +static struct smcd_seid SMC_DEFAULT_V2_SEID = { + .seid_string = "IBM-SYSZ-ISMSEID00000000", + .serial_number = "1000", + .type = "1000", +}; + +static inline void smc_lo_gen_id(struct smc_lo_dev *ldev) +{ + /* TODO: ensure local_gid is unique. + */ + get_random_bytes(&ldev->local_gid, sizeof(ldev->local_gid)); + ldev->chid = SMC_LO_CHID; +} + +static int smc_lo_query_rgid(struct smcd_dev *smcd, u64 rgid, + u32 vid_valid, u32 vid) +{ + struct smc_lo_dev *ldev = smcd->priv; + + /* rgid should equal to lgid in loopback */ + if (!ldev || rgid != ldev->local_gid) + return -ENETUNREACH; + return 0; +} + +static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb, + void *client_priv) +{ + struct smc_lo_dev *ldev = smcd->priv; + struct smc_lo_dmb_node *dmb_node; + int sba_idx, rc; + + /* check space for new dmb */ + for_each_clear_bit(sba_idx, ldev->sba_idx_mask, SMC_LODEV_MAX_DMBS) { + if (!test_and_set_bit(sba_idx, ldev->sba_idx_mask)) + break; + } + if (sba_idx == SMC_LODEV_MAX_DMBS) + return -ENOSPC; + + dmb_node = kzalloc(sizeof(*dmb_node), GFP_KERNEL); + if (!dmb_node) { + rc = -ENOMEM; + goto err_bit; + } + + dmb_node->sba_idx = sba_idx; + dmb_node->cpu_addr = kzalloc(dmb->dmb_len, GFP_KERNEL | + __GFP_NOWARN | __GFP_NORETRY | + __GFP_NOMEMALLOC); + if (!dmb_node->cpu_addr) { + rc = -ENOMEM; + goto err_node; + } + dmb_node->len = dmb->dmb_len; + + /* TODO: token is random but not exclusive ! + * suppose to find token in dmb hask table, if has this token + * already, then generate another one. 
+ */ + /* add new dmb into hash table */ + get_random_bytes(&dmb_node->token, sizeof(dmb_node->token)); + write_lock(&ldev->dmb_ht_lock); + hash_add(ldev->dmb_ht, &dmb_node->list, dmb_node->token); + write_unlock(&ldev->dmb_ht_lock); + + dmb->sba_idx = dmb_node->sba_idx; + dmb->dmb_tok = dmb_node->token; + dmb->cpu_addr = dmb_node->cpu_addr; + dmb->dma_addr = dmb_node->dma_addr; + dmb->dmb_len = dmb_node->len; + + return 0; + +err_node: + kfree(dmb_node); +err_bit: + clear_bit(sba_idx, ldev->sba_idx_mask); + return rc; +} + +static int smc_lo_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb) +{ + struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node; + struct smc_lo_dev *ldev = smcd->priv; + + /* remove dmb from hash table */ + write_lock(&ldev->dmb_ht_lock); + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) { + if (tmp_node->token == dmb->dmb_tok) { + dmb_node = tmp_node; + break; + } + } + if (!dmb_node) { + write_unlock(&ldev->dmb_ht_lock); + return -EINVAL; + } + hash_del(&dmb_node->list); + write_unlock(&ldev->dmb_ht_lock); + + clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask); + kfree(dmb_node->cpu_addr); + kfree(dmb_node); + + return 0; +} + +static int smc_lo_add_vlan_id(struct smcd_dev *smcd, u64 vlan_id) +{ + return -EOPNOTSUPP; +} + +static int smc_lo_del_vlan_id(struct smcd_dev *smcd, u64 vlan_id) +{ + return -EOPNOTSUPP; +} + +static int smc_lo_set_vlan_required(struct smcd_dev *smcd) +{ + return -EOPNOTSUPP; +} + +static int smc_lo_reset_vlan_required(struct smcd_dev *smcd) +{ + return -EOPNOTSUPP; +} + +static int smc_lo_move_data(struct smcd_dev *smcd, u64 dmb_tok, unsigned int idx, + bool sf, unsigned int offset, void *data, + unsigned int size) +{ + struct smc_lo_dmb_node *rmb_node = NULL, *tmp_node; + struct smc_lo_dev *ldev = smcd->priv; + + read_lock(&ldev->dmb_ht_lock); + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_tok) { + if (tmp_node->token == dmb_tok) { + rmb_node = tmp_node; + break; + } + } + if (!rmb_node) { + read_unlock(&ldev->dmb_ht_lock); + return -EINVAL; + } + read_unlock(&ldev->dmb_ht_lock); + + memcpy((char *)rmb_node->cpu_addr + offset, data, size); + + if (sf) { + struct smc_connection *conn = + smcd->conn[rmb_node->sba_idx]; + + if (conn && !conn->killed) + smcd_cdc_rx_handler(conn); + } + return 0; +} + +static u8 *smc_lo_get_system_eid(void) +{ + return SMC_DEFAULT_V2_SEID.seid_string; +} + +static u64 smc_lo_get_local_gid(struct smcd_dev *smcd) +{ + return ((struct smc_lo_dev *)smcd->priv)->local_gid; +} + +static u16 smc_lo_get_chid(struct smcd_dev *smcd) +{ + return ((struct smc_lo_dev *)smcd->priv)->chid; +} + +static struct device *smc_lo_get_dev(struct smcd_dev *smcd) +{ + return &((struct smc_lo_dev *)smcd->priv)->dev; +} + +static const struct smcd_ops lo_ops = { + .query_remote_gid = smc_lo_query_rgid, + .register_dmb = smc_lo_register_dmb, + .unregister_dmb = smc_lo_unregister_dmb, + .add_vlan_id = smc_lo_add_vlan_id, + .del_vlan_id = smc_lo_del_vlan_id, + .set_vlan_required = smc_lo_set_vlan_required, + .reset_vlan_required = smc_lo_reset_vlan_required, + .signal_event = NULL, + .move_data = smc_lo_move_data, + .get_system_eid = smc_lo_get_system_eid, + .get_local_gid = smc_lo_get_local_gid, + .get_chid = smc_lo_get_chid, + .get_dev = smc_lo_get_dev, +}; + +static struct smcd_dev *smcd_lo_alloc_dev(const struct smcd_ops *ops, + int max_dmbs) +{ + struct smcd_dev *smcd; + + smcd = kzalloc(sizeof(*smcd), GFP_KERNEL); + if (!smcd) + return NULL; + + smcd->conn = kcalloc(max_dmbs, sizeof(struct 
smc_connection *), + GFP_KERNEL); + if (!smcd->conn) + goto out_smcd; + + smcd->ops = ops; + + spin_lock_init(&smcd->lock); + spin_lock_init(&smcd->lgr_lock); + INIT_LIST_HEAD(&smcd->vlan); + INIT_LIST_HEAD(&smcd->lgr_list); + init_waitqueue_head(&smcd->lgrs_deleted); + return smcd; + +out_smcd: + kfree(smcd); + return NULL; +} + +static int smcd_lo_register_dev(struct smc_lo_dev *ldev) +{ + struct smcd_dev *smcd; + + smcd = smcd_lo_alloc_dev(&lo_ops, SMC_LODEV_MAX_DMBS); + if (!smcd) + return -ENOMEM; + + ldev->smcd = smcd; + smcd->priv = ldev; + smcd->parent_pci_dev = NULL; + mutex_lock(&smcd_dev_list.mutex); + smc_ism_check_v2_capable(smcd); + list_add(&smcd->list, &smcd_dev_list.list); + mutex_unlock(&smcd_dev_list.mutex); + pr_warn_ratelimited("smc: adding smcd device %s with pnetid %.16s%s\n", + smc_lo_dev_name, smcd->pnetid, + smcd->pnetid_by_user ? " (user defined)" : ""); + return 0; +} + +static void smcd_lo_unregister_dev(struct smc_lo_dev *ldev) +{ + struct smcd_dev *smcd = ldev->smcd; + + pr_warn_ratelimited("smc: removing smcd device %s\n", + smc_lo_dev_name); + smcd->going_away = 1; + smc_smcd_terminate_all(smcd); + mutex_lock(&smcd_dev_list.mutex); + list_del_init(&smcd->list); + mutex_unlock(&smcd_dev_list.mutex); +} + +static void smc_lo_dev_release(struct device *dev) +{ + struct smc_lo_dev *ldev = + container_of(dev, struct smc_lo_dev, dev); + struct smcd_dev *smcd = ldev->smcd; + + kfree(smcd->conn); + kfree(smcd); + kfree(ldev); +} + +static int smc_lo_dev_init(struct smc_lo_dev *ldev) +{ + smc_lo_gen_id(ldev); + rwlock_init(&ldev->dmb_ht_lock); + hash_init(ldev->dmb_ht); + + return smcd_lo_register_dev(ldev); +} + +static int smc_lo_dev_probe(void) +{ + struct smc_lo_dev *ldev; + int ret; + + ldev = kzalloc(sizeof(*ldev), GFP_KERNEL); + if (!ldev) + return -ENOMEM; + + ldev->dev.parent = NULL; + ldev->dev.release = smc_lo_dev_release; + device_initialize(&ldev->dev); + dev_set_name(&ldev->dev, smc_lo_dev_name); + ret = device_add(&ldev->dev); + if (ret) + goto free_dev; + + ret = smc_lo_dev_init(ldev); + if (ret) + goto put_dev; + + lo_dev = ldev; /* global loopback device */ + return 0; + +put_dev: + device_del(&ldev->dev); +free_dev: + kfree(ldev); + return ret; +} + +static void smc_lo_dev_exit(struct smc_lo_dev *ldev) +{ + smcd_lo_unregister_dev(ldev); +} + +static void smc_lo_dev_remove(void) +{ + if (!lo_dev) + return; + + smc_lo_dev_exit(lo_dev); + device_del(&lo_dev->dev); /* device_add in smc_lo_dev_probe */ + put_device(&lo_dev->dev); /* device_initialize in smc_lo_dev_probe */ +} + +int smc_loopback_init(void) +{ +#if !IS_ENABLED(CONFIG_ISM) + return smc_lo_dev_probe(); +#else + return 0; +#endif +} + +void smc_loopback_exit(void) +{ +#if !IS_ENABLED(CONFIG_ISM) + smc_lo_dev_remove(); +#endif +} diff --git a/net/smc/smc_loopback.h b/net/smc/smc_loopback.h new file mode 100644 index 0000000..9d34aba --- /dev/null +++ b/net/smc/smc_loopback.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Shared Memory Communications Direct over loopback device. + * + * Provide a SMC-D loopback dummy device. + * + * Copyright (c) 2022, Alibaba Inc. 
+ * + * Author: Wen Gu + * Tony Lu + * + */ + +#ifndef _SMC_LOOPBACK_H +#define _SMC_LOOPBACK_H + +#include +#include +#include +#include +#include + +#include "smc_core.h" + +#define SMC_LO_CHID 0xFFFF +#define SMC_LODEV_MAX_DMBS 5000 +#define SMC_LODEV_MAX_DMBS_BUCKETS 16 + +struct smc_lo_dmb_node { + struct hlist_node list; + u64 token; + u32 len; + u32 sba_idx; + void *cpu_addr; + dma_addr_t dma_addr; +}; + +struct smc_lo_dev { + struct smcd_dev *smcd; + struct device dev; + u16 chid; + u64 local_gid; + DECLARE_BITMAP(sba_idx_mask, SMC_LODEV_MAX_DMBS); + rwlock_t dmb_ht_lock; + DECLARE_HASHTABLE(dmb_ht, SMC_LODEV_MAX_DMBS_BUCKETS); +}; + +int smc_loopback_init(void); +void smc_loopback_exit(void); + +#endif /* _SMC_LOOPBACK_H */ From patchwork Wed Feb 15 16:18:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141898 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4EF9C64EC4 for ; Wed, 15 Feb 2023 16:19:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230078AbjBOQTE (ORCPT ); Wed, 15 Feb 2023 11:19:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230014AbjBOQSt (ORCPT ); Wed, 15 Feb 2023 11:18:49 -0500 Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0EEF13A85D; Wed, 15 Feb 2023 08:18:46 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R171e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045168;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl2feR_1676477922; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl2feR_1676477922) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:44 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 5/9] net/smc: Introduce an interface for getting DMB attribute Date: Thu, 16 Feb 2023 00:18:21 +0800 Message-Id: <1676477905-88043-6-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC On s390, since all OSs run on a kind of machine level hypervisor which is a partitioning hypervisor without paging, the sndbufs and DMBs in such case are unable to be mapped to the same physical memory. However, in other scene, such as communication within the same OS instance (loopback) or between guests of a paging hypervisor, eg. KVM, the sndbufs and DMBs can be mapped to the same physical memory to avoid memory copy from sndbufs to DMBs. So this patch introduces an interface to smcd_ops for users to judge whether DMB-map is available. And for reuse, the interface is designed to return DMB attribute, not only mappability. 
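To make the provider/consumer split behind the new hook concrete, a minimal sketch of both sides follows. The provider function name is invented for illustration (the loopback device adds an equivalent implementation later in this series); the consumer check mirrors smc_ism_dmb_mappable() added in this patch.

/* provider side: advertise that this device's DMBs can be mapped into the
 * sender's address space
 */
static int my_dev_get_dmb_attr(struct smcd_dev *smcd)
{
	return 1 << ISM_DMB_MAPPABLE;
}

/* consumer side: the hook is optional, so devices that do not implement it
 * (such as ISM) keep the existing copy-based data path
 */
static bool dmb_is_mappable(struct smcd_dev *smcd)
{
	return smcd->ops->get_dev_dmb_attr &&
	       (smcd->ops->get_dev_dmb_attr(smcd) & (1 << ISM_DMB_MAPPABLE));
}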
Signed-off-by: Wen Gu --- include/net/smc.h | 5 +++++ net/smc/smc_ism.c | 8 ++++++++ net/smc/smc_ism.h | 1 + 3 files changed, 14 insertions(+) diff --git a/include/net/smc.h b/include/net/smc.h index 50df54d..256d600 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -55,6 +55,10 @@ struct smcd_seid { #define ISM_ERROR 0xFFFF +enum { + ISM_DMB_MAPPABLE = 0, +}; + struct smcd_dev; struct smcd_ops { @@ -76,6 +80,7 @@ struct smcd_ops { u64 (*get_local_gid)(struct smcd_dev *dev); u16 (*get_chid)(struct smcd_dev *dev); struct device* (*get_dev)(struct smcd_dev *dev); + int (*get_dev_dmb_attr)(struct smcd_dev *dev); }; struct smcd_dev { diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index 9504273..e085c48 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -213,6 +213,14 @@ int smc_ism_unregister_dmb(struct smcd_dev *smcd, struct smc_buf_desc *dmb_desc) return rc; } +bool smc_ism_dmb_mappable(struct smcd_dev *smcd) +{ + if (smcd->ops->get_dev_dmb_attr && + (smcd->ops->get_dev_dmb_attr(smcd) & (1 << ISM_DMB_MAPPABLE))) + return true; + return false; +} + int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len, struct smc_buf_desc *dmb_desc) { diff --git a/net/smc/smc_ism.h b/net/smc/smc_ism.h index 14d2e77..aabea35 100644 --- a/net/smc/smc_ism.h +++ b/net/smc/smc_ism.h @@ -38,6 +38,7 @@ struct smc_ism_vlanid { /* VLAN id set on ISM device */ int smc_ism_register_dmb(struct smc_link_group *lgr, int buf_size, struct smc_buf_desc *dmb_desc); int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc); +bool smc_ism_dmb_mappable(struct smcd_dev *smcd); int smc_ism_signal_shutdown(struct smc_link_group *lgr); void smc_ism_get_system_eid(u8 **eid); u16 smc_ism_get_chid(struct smcd_dev *dev); From patchwork Wed Feb 15 16:18:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141900 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8695C64EC4 for ; Wed, 15 Feb 2023 16:19:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230121AbjBOQTU (ORCPT ); Wed, 15 Feb 2023 11:19:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230044AbjBOQSx (ORCPT ); Wed, 15 Feb 2023 11:18:53 -0500 Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5B643B0E8; Wed, 15 Feb 2023 08:18:50 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R931e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl3P-7_1676477924; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl3P-7_1676477924) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:45 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 6/9] net/smc: Introudce interfaces for DMB attach and detach Date: Thu, 16 Feb 2023 00:18:22 +0800 
Message-Id: <1676477905-88043-7-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch extends smcd_ops, adding two more semantic for SMC-D DMB: - attach_dmb: Attach an already registered DMB to a specific buf_desc, so that we can refer to the DMB through this buf_desc. - detach_dmb: Reverse operation of attach_dmb. detach the DMB from the buf_desc. This interface extension is to prepare for the avoidance of memory copy from sndbuf to DMB with SMC-D device whose DMBs has ISM_DMB_MAPPABLE attribute. Signed-off-by: Wen Gu --- include/net/smc.h | 2 ++ net/smc/smc_ism.c | 31 +++++++++++++++++++++++++++++++ net/smc/smc_ism.h | 2 ++ 3 files changed, 35 insertions(+) diff --git a/include/net/smc.h b/include/net/smc.h index 256d600..0790e64 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -67,6 +67,8 @@ struct smcd_ops { int (*register_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb, void *client_priv); int (*unregister_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb); + int (*attach_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb); + int (*detach_dmb)(struct smcd_dev *dev, u64 token); int (*add_vlan_id)(struct smcd_dev *dev, u64 vlan_id); int (*del_vlan_id)(struct smcd_dev *dev, u64 vlan_id); int (*set_vlan_required)(struct smcd_dev *dev); diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index e085c48..e2670e3 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -243,6 +243,37 @@ int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len, return rc; } +int smc_ism_attach_dmb(struct smcd_dev *dev, u64 token, + struct smc_buf_desc *dmb_desc) +{ + struct smcd_dmb dmb; + int rc = 0; + + memset(&dmb, 0, sizeof(dmb)); + dmb.dmb_tok = token; + + if (!dev->ops->attach_dmb) + return -EINVAL; + + rc = dev->ops->attach_dmb(dev, &dmb); + if (!rc) { + dmb_desc->sba_idx = dmb.sba_idx; + dmb_desc->token = dmb.dmb_tok; + dmb_desc->cpu_addr = dmb.cpu_addr; + dmb_desc->dma_addr = dmb.dma_addr; + dmb_desc->len = dmb.dmb_len; + } + return rc; +} + +int smc_ism_detach_dmb(struct smcd_dev *dev, u64 token) +{ + if (!dev->ops->detach_dmb) + return -EINVAL; + + return dev->ops->detach_dmb(dev, token); +} + static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd, struct sk_buff *skb, struct netlink_callback *cb) diff --git a/net/smc/smc_ism.h b/net/smc/smc_ism.h index aabea35..fc16dc2 100644 --- a/net/smc/smc_ism.h +++ b/net/smc/smc_ism.h @@ -39,6 +39,8 @@ int smc_ism_register_dmb(struct smc_link_group *lgr, int buf_size, struct smc_buf_desc *dmb_desc); int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc); bool smc_ism_dmb_mappable(struct smcd_dev *smcd); +int smc_ism_attach_dmb(struct smcd_dev *dev, u64 token, struct smc_buf_desc *dmb_desc); +int smc_ism_detach_dmb(struct smcd_dev *dev, u64 token); int smc_ism_signal_shutdown(struct smc_link_group *lgr); void smc_ism_get_system_eid(u8 **eid); u16 smc_ism_get_chid(struct smcd_dev *dev); From patchwork Wed Feb 15 16:18:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141901 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ECEB2C636CC for ; Wed, 15 Feb 2023 16:19:23 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229612AbjBOQTW (ORCPT ); Wed, 15 Feb 2023 11:19:22 -0500
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57328 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229982AbjBOQS6 (ORCPT ); Wed, 15 Feb 2023 11:18:58 -0500
Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C61563B0E6; Wed, 15 Feb 2023 08:18:50 -0800 (PST)
X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R121e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046056;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl.oTZ_1676477926;
Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl.oTZ_1676477926) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:47 +0800
From: Wen Gu
To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com
Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH net-next v3 7/9] net/smc: Avoid data copy from sndbuf to peer RMB in SMC-D
Date: Thu, 16 Feb 2023 00:18:23 +0800
Message-Id: <1676477905-88043-8-git-send-email-guwen@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com>
References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com>
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC

This patch aims to avoid data copy from local sndbuf to peer RMB by attaching the local sndbuf to the peer RMB when DMBs have the ISM_DMB_MAPPABLE attribute. After this, local sndbuf and peer RMB share the same physical memory.

 +----------+                     +----------+
 | socket A |                     | socket B |
 +----------+                     +----------+
       |                               ^
       |         +---------+           |
  regard as      |         | ----------|
 local sndbuf    |   B's   |      regard as
       |         |   RMB   |      local RMB
       |-------> |         |
                 +---------+

1. Actions on local RMB:
   a. Create or reuse RMB when the connection is created;
   b. Unuse RMB when the connection is freed;
   c. Free RMB when the link group is freed.

2. Actions on local sndbuf: attach local sndbuf to peer RMB.
   a. Attach local sndbuf to peer RMB by the rtoken exchanged through the
      CLC message. From then on, accessing local sndbuf is equivalent to
      accessing peer RMB.
   b. sndbuf_desc is exclusive to a specific connection and won't be added
      to the lgr buffer pool for reuse.
   c. Local sndbuf is detached from peer RMB when the connection is freed,
      and sndbuf_desc is freed as well.

Therefore, the data written to local sndbuf will directly reach peer RMB.
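The key step is condensed in the sketch below: a simplified restatement of smcd_buf_attach() from this patch, not additional functionality. The connection's sndbuf descriptor is pointed at the peer's DMB identified by the rtoken, and the leading CDC message area at the start of every DMB is skipped so that payload writes land directly in the peer RMB.

/* simplified from smcd_buf_attach() below; DMB layout is
 *   [struct smcd_cdc_msg][payload ...]
 * so the attached sndbuf view must skip the CDC header area
 */
static int attach_sndbuf_to_peer_rmb(struct smc_connection *conn,
				     struct smc_buf_desc *desc)
{
	int rc;

	rc = smc_ism_attach_dmb(conn->lgr->smcd, conn->peer_token, desc);
	if (rc)
		return rc;			/* caller declines the connection */

	desc->cpu_addr = (u8 *)desc->cpu_addr + sizeof(struct smcd_cdc_msg);
	desc->len -= sizeof(struct smcd_cdc_msg);
	conn->sndbuf_desc = desc;
	conn->sndbuf_desc->used = 1;
	atomic_set(&conn->sndbuf_space, conn->sndbuf_desc->len);
	return 0;
}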
Signed-off-by: Wen Gu --- net/smc/af_smc.c | 14 +++++++++++ net/smc/smc_core.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++- net/smc/smc_core.h | 1 + 3 files changed, 84 insertions(+), 1 deletion(-) diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index dcbd7e3..079a9f1 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -1366,6 +1366,12 @@ static int smc_connect_ism(struct smc_sock *smc, } smc_conn_save_peer_info(smc, aclc); + + if (smc_ism_dmb_mappable(smc->conn.lgr->smcd)) { + rc = smcd_buf_attach(smc); + if (rc) + goto connect_abort; + } smc_close_init(smc); smc_rx_init(smc); smc_tx_init(smc); @@ -2422,6 +2428,14 @@ static void smc_listen_work(struct work_struct *work) mutex_unlock(&smc_server_lgr_pending); } smc_conn_save_peer_info(new_smc, cclc); + + if (ini->is_smcd && + smc_ism_dmb_mappable(new_smc->conn.lgr->smcd)) { + rc = smcd_buf_attach(new_smc); + if (rc) + goto out_decl; + } + smc_listen_out_connected(new_smc); SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini); goto out_free; diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c index 7642b16..5ed018b 100644 --- a/net/smc/smc_core.c +++ b/net/smc/smc_core.c @@ -1129,6 +1129,20 @@ static void smcr_buf_unuse(struct smc_buf_desc *buf_desc, bool is_rmb, } } +static void smcd_buf_detach(struct smc_connection *conn) +{ + struct smcd_dev *smcd = conn->lgr->smcd; + u64 peer_token = conn->peer_token; + + if (!conn->sndbuf_desc) + return; + + smc_ism_detach_dmb(smcd, peer_token); + + kfree(conn->sndbuf_desc); + conn->sndbuf_desc = NULL; +} + static void smc_buf_unuse(struct smc_connection *conn, struct smc_link_group *lgr) { @@ -1175,6 +1189,10 @@ void smc_conn_free(struct smc_connection *conn) if (!list_empty(&lgr->list)) smc_ism_unset_conn(conn); tasklet_kill(&conn->rx_tsklet); + + /* detach sndbuf from peer RMB */ + if (smc_ism_dmb_mappable(lgr->smcd)) + smcd_buf_detach(conn); } else { smc_cdc_wait_pend_tx_wr(conn); if (current_work() != &conn->abort_work) @@ -2425,15 +2443,23 @@ void smc_rmb_sync_sg_for_cpu(struct smc_connection *conn) */ int smc_buf_create(struct smc_sock *smc, bool is_smcd) { + bool sndbuf_created = false; int rc; + if (is_smcd && + smc_ism_dmb_mappable(smc->conn.lgr->smcd)) + goto create_rmb; + /* create send buffer */ rc = __smc_buf_create(smc, is_smcd, false); if (rc) return rc; + sndbuf_created = true; + +create_rmb: /* create rmb */ rc = __smc_buf_create(smc, is_smcd, true); - if (rc) { + if (rc && sndbuf_created) { mutex_lock(&smc->conn.lgr->sndbufs_lock); list_del(&smc->conn.sndbuf_desc->list); mutex_unlock(&smc->conn.lgr->sndbufs_lock); @@ -2443,6 +2469,48 @@ int smc_buf_create(struct smc_sock *smc, bool is_smcd) return rc; } +int smcd_buf_attach(struct smc_sock *smc) +{ + struct smc_connection *conn = &smc->conn; + struct smcd_dev *smcd = conn->lgr->smcd; + u64 peer_token = conn->peer_token; + struct smc_buf_desc *buf_desc; + int rc; + + buf_desc = kzalloc(sizeof(*buf_desc), GFP_KERNEL); + if (!buf_desc) + return -ENOMEM; + + /* map local sndbuf desc to peer RMB, so operations on local + * sndbuf are equivalent to operations on peer RMB. 
+ */ + rc = smc_ism_attach_dmb(smcd, peer_token, buf_desc); + if (rc) { + rc = SMC_CLC_DECL_MEM; + goto free; + } + + smc->sk.sk_sndbuf = buf_desc->len; + buf_desc->cpu_addr = (u8 *)buf_desc->cpu_addr + sizeof(struct smcd_cdc_msg); + buf_desc->len -= sizeof(struct smcd_cdc_msg); + conn->sndbuf_desc = buf_desc; + conn->sndbuf_desc->used = 1; + atomic_set(&conn->sndbuf_space, conn->sndbuf_desc->len); + return 0; + +free: + if (conn->rmb_desc) { + /* free local RMB as well */ + mutex_lock(&conn->lgr->rmbs_lock); + list_del(&conn->rmb_desc->list); + mutex_unlock(&conn->lgr->rmbs_lock); + smc_buf_free(conn->lgr, true, conn->rmb_desc); + conn->rmb_desc = NULL; + } + kfree(buf_desc); + return rc; +} + static inline int smc_rmb_reserve_rtoken_idx(struct smc_link_group *lgr) { int i; diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h index 285f9bd..080c810 100644 --- a/net/smc/smc_core.h +++ b/net/smc/smc_core.h @@ -518,6 +518,7 @@ void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid, void smc_smcd_terminate_all(struct smcd_dev *dev); void smc_smcr_terminate_all(struct smc_ib_device *smcibdev); int smc_buf_create(struct smc_sock *smc, bool is_smcd); +int smcd_buf_attach(struct smc_sock *smc); int smc_uncompress_bufsize(u8 compressed); int smc_rmb_rtoken_handling(struct smc_connection *conn, struct smc_link *link, struct smc_clc_msg_accept_confirm *clc); From patchwork Wed Feb 15 16:18:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141902 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE7B4C636D7 for ; Wed, 15 Feb 2023 16:19:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230099AbjBOQTY (ORCPT ); Wed, 15 Feb 2023 11:19:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230048AbjBOQS7 (ORCPT ); Wed, 15 Feb 2023 11:18:59 -0500 Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 33F6F3B0CE; Wed, 15 Feb 2023 08:18:52 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R811e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045168;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl04yG_1676477927; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl04yG_1676477927) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:49 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 8/9] net/smc: Modify cursor update logic when using mappable DMB Date: Thu, 16 Feb 2023 00:18:24 +0800 Message-Id: <1676477905-88043-9-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Since 
local sndbuf shares the same physical memory region with peer RMB when using mappable DMBs, the cursor update logic needs to be adapted. The main concern is to ensure that the data written by local to this memory region won't overwrite the data that has not been consumed by the peer. So in this scene, the fin_curs and sndbuf_space that were originally updated when sending out CDC message are not updated until the cons_curs update from the peer is received. Signed-off-by: Wen Gu --- net/smc/smc_cdc.c | 50 +++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 39 insertions(+), 11 deletions(-) diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c index 9023684..74fcd2f 100644 --- a/net/smc/smc_cdc.c +++ b/net/smc/smc_cdc.c @@ -18,6 +18,7 @@ #include "smc_tx.h" #include "smc_rx.h" #include "smc_close.h" +#include "smc_ism.h" /********************************** send *************************************/ @@ -253,17 +254,24 @@ int smcd_cdc_msg_send(struct smc_connection *conn) return rc; smc_curs_copy(&conn->rx_curs_confirmed, &curs, conn); conn->local_rx_ctrl.prod_flags.cons_curs_upd_req = 0; - /* Calculate transmitted data and increment free send buffer space */ - diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin, - &conn->tx_curs_sent); - /* increased by confirmed number of bytes */ - smp_mb__before_atomic(); - atomic_add(diff, &conn->sndbuf_space); - /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */ - smp_mb__after_atomic(); - smc_curs_copy(&conn->tx_curs_fin, &conn->tx_curs_sent, conn); + if (!smc_ism_dmb_mappable(conn->lgr->smcd)) { + /* If local sndbuf has been mapped to peer RMB, then + * don't update the tx_curs_fin and sndbuf_space until + * peer has consumed the data in RMB. + */ - smc_tx_sndbuf_nonfull(smc); + /* Calculate transmitted data and increment free send buffer space */ + diff = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin, + &conn->tx_curs_sent); + /* increased by confirmed number of bytes */ + smp_mb__before_atomic(); + atomic_add(diff, &conn->sndbuf_space); + /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */ + smp_mb__after_atomic(); + smc_curs_copy(&conn->tx_curs_fin, &conn->tx_curs_sent, conn); + + smc_tx_sndbuf_nonfull(smc); + } return rc; } @@ -321,7 +329,7 @@ static void smc_cdc_msg_recv_action(struct smc_sock *smc, { union smc_host_cursor cons_old, prod_old; struct smc_connection *conn = &smc->conn; - int diff_cons, diff_prod; + int diff_cons, diff_prod, diff_tx; smc_curs_copy(&prod_old, &conn->local_rx_ctrl.prod, conn); smc_curs_copy(&cons_old, &conn->local_rx_ctrl.cons, conn); @@ -337,6 +345,26 @@ static void smc_cdc_msg_recv_action(struct smc_sock *smc, atomic_add(diff_cons, &conn->peer_rmbe_space); /* guarantee 0 <= peer_rmbe_space <= peer_rmbe_size */ smp_mb__after_atomic(); + + if (conn->lgr->is_smcd && + smc_ism_dmb_mappable(conn->lgr->smcd)) { + /* If local sndbuf has been mapped to peer RMB, then + * update tx_curs_fin and sndbuf_space when peer has + * consumed the data in it's RMB. 
+ */ + + /* calculate peer rmb consumed data */ + diff_tx = smc_curs_diff(conn->sndbuf_desc->len, &conn->tx_curs_fin, + &conn->local_rx_ctrl.cons); + /* increase local sndbuf space and fin_curs */ + smp_mb__before_atomic(); + atomic_add(diff_tx, &conn->sndbuf_space); + /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */ + smp_mb__after_atomic(); + smc_curs_copy(&conn->tx_curs_fin, &conn->local_rx_ctrl.cons, conn); + + smc_tx_sndbuf_nonfull(smc); + } } diff_prod = smc_curs_diff(conn->rmb_desc->len, &prod_old, From patchwork Wed Feb 15 16:18:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wen Gu X-Patchwork-Id: 13141903 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65683C64EC4 for ; Wed, 15 Feb 2023 16:19:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230071AbjBOQT0 (ORCPT ); Wed, 15 Feb 2023 11:19:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57914 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230100AbjBOQTL (ORCPT ); Wed, 15 Feb 2023 11:19:11 -0500 Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 772E13B0DF; Wed, 15 Feb 2023 08:18:55 -0800 (PST) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R991e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046056;MF=guwen@linux.alibaba.com;NM=1;PH=DS;RN=10;SR=0;TI=SMTPD_---0Vbl2fgZ_1676477929; Received: from localhost(mailfrom:guwen@linux.alibaba.com fp:SMTPD_---0Vbl2fgZ_1676477929) by smtp.aliyun-inc.com; Thu, 16 Feb 2023 00:18:51 +0800 From: Wen Gu To: kgraul@linux.ibm.com, wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com Cc: linux-s390@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH net-next v3 9/9] net/smc: Add interface implementation of loopback device Date: Thu, 16 Feb 2023 00:18:25 +0800 Message-Id: <1676477905-88043-10-git-send-email-guwen@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> References: <1676477905-88043-1-git-send-email-guwen@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch completes the specific implementation of loopback device for the newly added SMC-D DMB-related interface. The loopback device always provides mappable DMB because the device users are in the same OS instance. Signed-off-by: Wen Gu --- net/smc/smc_loopback.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++---- net/smc/smc_loopback.h | 4 +++ 2 files changed, 90 insertions(+), 7 deletions(-) diff --git a/net/smc/smc_loopback.c b/net/smc/smc_loopback.c index 38bacd8..2fa2a73 100644 --- a/net/smc/smc_loopback.c +++ b/net/smc/smc_loopback.c @@ -77,6 +77,7 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb, goto err_node; } dmb_node->len = dmb->dmb_len; + refcount_set(&dmb_node->refcnt, 1); /* TODO: token is random but not exclusive ! 
* suppose to find token in dmb hask table, if has this token @@ -87,6 +88,7 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb, write_lock(&ldev->dmb_ht_lock); hash_add(ldev->dmb_ht, &dmb_node->list, dmb_node->token); write_unlock(&ldev->dmb_ht_lock); + atomic_inc(&ldev->dmb_cnt); dmb->sba_idx = dmb_node->sba_idx; dmb->dmb_tok = dmb_node->token; @@ -124,12 +126,76 @@ static int smc_lo_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb) write_unlock(&ldev->dmb_ht_lock); clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask); + + /* wait for dmb refcnt to be 0 */ + if (!refcount_dec_and_test(&dmb_node->refcnt)) + wait_event(ldev->dmbs_release, !refcount_read(&dmb_node->refcnt)); kfree(dmb_node->cpu_addr); kfree(dmb_node); + if (atomic_dec_and_test(&ldev->dmb_cnt)) + wake_up(&ldev->ldev_release); + return 0; +} + +static int smc_lo_attach_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb) +{ + struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node; + struct smc_lo_dev *ldev = smcd->priv; + + /* find dmb_node according to dmb->dmb_tok */ + read_lock(&ldev->dmb_ht_lock); + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) { + if (tmp_node->token == dmb->dmb_tok) { + dmb_node = tmp_node; + break; + } + } + if (!dmb_node) { + read_unlock(&ldev->dmb_ht_lock); + return -EINVAL; + } + read_unlock(&ldev->dmb_ht_lock); + refcount_inc(&dmb_node->refcnt); + + /* provide dmb information */ + dmb->sba_idx = dmb_node->sba_idx; + dmb->dmb_tok = dmb_node->token; + dmb->cpu_addr = dmb_node->cpu_addr; + dmb->dma_addr = dmb_node->dma_addr; + dmb->dmb_len = dmb_node->len; + return 0; +} + +static int smc_lo_detach_dmb(struct smcd_dev *smcd, u64 token) +{ + struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node; + struct smc_lo_dev *ldev = smcd->priv; + + /* find dmb_node according to dmb->dmb_tok */ + read_lock(&ldev->dmb_ht_lock); + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, token) { + if (tmp_node->token == token) { + dmb_node = tmp_node; + break; + } + } + if (!dmb_node) { + read_unlock(&ldev->dmb_ht_lock); + return -EINVAL; + } + read_unlock(&ldev->dmb_ht_lock); + + if (refcount_dec_and_test(&dmb_node->refcnt)) + wake_up_all(&ldev->dmbs_release); return 0; } +static int smc_lo_get_dev_dmb_attr(struct smcd_dev *smcd) +{ + return (1 << ISM_DMB_MAPPABLE); +} + static int smc_lo_add_vlan_id(struct smcd_dev *smcd, u64 vlan_id) { return -EOPNOTSUPP; @@ -156,7 +222,15 @@ static int smc_lo_move_data(struct smcd_dev *smcd, u64 dmb_tok, unsigned int idx { struct smc_lo_dmb_node *rmb_node = NULL, *tmp_node; struct smc_lo_dev *ldev = smcd->priv; - + struct smc_connection *conn; + + if (!sf) { + /* local sndbuf shares the same physical memory with + * peer RMB, so no need to copy data from local sndbuf + * to peer RMB. 
+ */ + return 0; + } read_lock(&ldev->dmb_ht_lock); hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_tok) { if (tmp_node->token == dmb_tok) { @@ -172,13 +246,10 @@ static int smc_lo_move_data(struct smcd_dev *smcd, u64 dmb_tok, unsigned int idx memcpy((char *)rmb_node->cpu_addr + offset, data, size); - if (sf) { - struct smc_connection *conn = - smcd->conn[rmb_node->sba_idx]; + conn = smcd->conn[rmb_node->sba_idx]; + if (conn && !conn->killed) + smcd_cdc_rx_handler(conn); - if (conn && !conn->killed) - smcd_cdc_rx_handler(conn); - } return 0; } @@ -206,6 +277,8 @@ static struct device *smc_lo_get_dev(struct smcd_dev *smcd) .query_remote_gid = smc_lo_query_rgid, .register_dmb = smc_lo_register_dmb, .unregister_dmb = smc_lo_unregister_dmb, + .attach_dmb = smc_lo_attach_dmb, + .detach_dmb = smc_lo_detach_dmb, .add_vlan_id = smc_lo_add_vlan_id, .del_vlan_id = smc_lo_del_vlan_id, .set_vlan_required = smc_lo_set_vlan_required, @@ -216,6 +289,7 @@ static struct device *smc_lo_get_dev(struct smcd_dev *smcd) .get_local_gid = smc_lo_get_local_gid, .get_chid = smc_lo_get_chid, .get_dev = smc_lo_get_dev, + .get_dev_dmb_attr = smc_lo_get_dev_dmb_attr, }; static struct smcd_dev *smcd_lo_alloc_dev(const struct smcd_ops *ops, @@ -296,6 +370,9 @@ static int smc_lo_dev_init(struct smc_lo_dev *ldev) smc_lo_gen_id(ldev); rwlock_init(&ldev->dmb_ht_lock); hash_init(ldev->dmb_ht); + atomic_set(&ldev->dmb_cnt, 0); + init_waitqueue_head(&ldev->dmbs_release); + init_waitqueue_head(&ldev->ldev_release); return smcd_lo_register_dev(ldev); } @@ -334,6 +411,8 @@ static int smc_lo_dev_probe(void) static void smc_lo_dev_exit(struct smc_lo_dev *ldev) { smcd_lo_unregister_dev(ldev); + if (atomic_read(&ldev->dmb_cnt)) + wait_event(ldev->ldev_release, !atomic_read(&ldev->dmb_cnt)); } static void smc_lo_dev_remove(void) diff --git a/net/smc/smc_loopback.h b/net/smc/smc_loopback.h index 9d34aba..0a09330 100644 --- a/net/smc/smc_loopback.h +++ b/net/smc/smc_loopback.h @@ -33,6 +33,7 @@ struct smc_lo_dmb_node { u32 sba_idx; void *cpu_addr; dma_addr_t dma_addr; + refcount_t refcnt; }; struct smc_lo_dev { @@ -43,6 +44,9 @@ struct smc_lo_dev { DECLARE_BITMAP(sba_idx_mask, SMC_LODEV_MAX_DMBS); rwlock_t dmb_ht_lock; DECLARE_HASHTABLE(dmb_ht, SMC_LODEV_MAX_DMBS_BUCKETS); + atomic_t dmb_cnt; + wait_queue_head_t dmbs_release; + wait_queue_head_t ldev_release; }; int smc_loopback_init(void);
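Taken together with patches 6 and 7, the DMB lifetime rules added here can be summarized in a short sketch (the helper names are condensed for illustration; the fields and primitives are the ones this patch introduces): an attached sndbuf pins the peer's DMB, unregistering a DMB waits for the last attacher to detach, and removing the loopback device waits for the last DMB to disappear.

/* condensed sketch of the reference counting added in this patch */
static void lo_dmb_attach_ref(struct smc_lo_dmb_node *dmb)
{
	refcount_inc(&dmb->refcnt);		/* a sndbuf now maps this DMB */
}

static void lo_dmb_detach_ref(struct smc_lo_dev *ldev,
			      struct smc_lo_dmb_node *dmb)
{
	if (refcount_dec_and_test(&dmb->refcnt))
		wake_up_all(&ldev->dmbs_release);	/* unblock unregister */
}

static void lo_dmb_unregister_wait(struct smc_lo_dev *ldev,
				   struct smc_lo_dmb_node *dmb)
{
	/* drop the reference taken at registration and wait for any
	 * remaining attachers before the DMB memory is freed
	 */
	if (!refcount_dec_and_test(&dmb->refcnt))
		wait_event(ldev->dmbs_release, !refcount_read(&dmb->refcnt));
	if (atomic_dec_and_test(&ldev->dmb_cnt))
		wake_up(&ldev->ldev_release);		/* unblock device removal */
}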