From patchwork Sun Jan 17 14:59:42 2021
From: Tariq Toukan <tariqt@nvidia.com>
Subject: [PATCH net-next V3 1/8] net: netdevice: Add operation ndo_sk_get_lower_dev
Date: Sun, 17 Jan 2021 16:59:42 +0200

ndo_sk_get_lower_dev returns the lower netdev that corresponds to a given
socket. In addition, implement a helper, netdev_sk_get_lowest_dev(), that
returns the lowest device in the chain.
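For illustration only (not part of this series): an upper device with a single
lower dev could implement the new operation by simply handing that lower dev
back for any socket. 'struct foo_priv' and its 'lower_dev' field are
hypothetical; the bonding implementation in patch 3 is the real user and
selects a slave based on a per-socket hash.

/* Hypothetical upper driver: every socket maps to the single lower dev. */
static struct net_device *foo_sk_get_lower_dev(struct net_device *dev,
					       struct sock *sk)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* May be NULL if no lower device is currently attached. */
	return READ_ONCE(priv->lower_dev);
}

static const struct net_device_ops foo_netdev_ops = {
	/* ... */
	.ndo_sk_get_lower_dev	= foo_sk_get_lower_dev,
};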
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Boris Pismenny
---
 include/linux/netdevice.h |  4 ++++
 net/core/dev.c            | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 5b949076ed23..02dcef4d66e2 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1398,6 +1398,8 @@ struct net_device_ops {
 	struct net_device*	(*ndo_get_xmit_slave)(struct net_device *dev,
 						      struct sk_buff *skb,
 						      bool all_slaves);
+	struct net_device*	(*ndo_sk_get_lower_dev)(struct net_device *dev,
+							struct sock *sk);
 	netdev_features_t	(*ndo_fix_features)(struct net_device *dev,
 						    netdev_features_t features);
 	int			(*ndo_set_features)(struct net_device *dev,
@@ -2858,6 +2860,8 @@ int init_dummy_netdev(struct net_device *dev);
 struct net_device *netdev_get_xmit_slave(struct net_device *dev,
 					 struct sk_buff *skb,
 					 bool all_slaves);
+struct net_device *netdev_sk_get_lowest_dev(struct net_device *dev,
+					    struct sock *sk);
 struct net_device *dev_get_by_index(struct net *net, int ifindex);
 struct net_device *__dev_get_by_index(struct net *net, int ifindex);
 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
diff --git a/net/core/dev.c b/net/core/dev.c
index bae35c1ae192..6b90520a01b1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -8105,6 +8105,39 @@ struct net_device *netdev_get_xmit_slave(struct net_device *dev,
 }
 EXPORT_SYMBOL(netdev_get_xmit_slave);
 
+static struct net_device *netdev_sk_get_lower_dev(struct net_device *dev,
+						  struct sock *sk)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+
+	if (!ops->ndo_sk_get_lower_dev)
+		return NULL;
+	return ops->ndo_sk_get_lower_dev(dev, sk);
+}
+
+/**
+ * netdev_sk_get_lowest_dev - Get the lowest device in chain given device and socket
+ * @dev: device
+ * @sk: the socket
+ *
+ * %NULL is returned if no lower device is found.
+ */
+
+struct net_device *netdev_sk_get_lowest_dev(struct net_device *dev,
+					    struct sock *sk)
+{
+	struct net_device *lower;
+
+	lower = netdev_sk_get_lower_dev(dev, sk);
+	while (lower) {
+		dev = lower;
+		lower = netdev_sk_get_lower_dev(dev, sk);
+	}
+
+	return dev;
+}
+EXPORT_SYMBOL(netdev_sk_get_lowest_dev);
+
 static void netdev_adjacent_add_links(struct net_device *dev)
 {
 	struct netdev_adjacent *iter;

From patchwork Sun Jan 17 14:59:43 2021
From: Tariq Toukan <tariqt@nvidia.com>
Subject: [PATCH net-next V3 2/8] net/bonding: Take IP hash logic into a helper
Date: Sun, 17 Jan 2021 16:59:43 +0200

The L3 hash logic will be used in a downstream patch for one more use case.
Move it into a helper function for better code reuse.
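For reference, the fold that the new helper captures can be exercised in
isolation. The snippet below is a stand-alone user-space copy of the same
arithmetic; the sample addresses, ports, and the flat u32 inputs are
illustrative stand-ins for struct flow_keys:

#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as the bond_ip_hash() helper: XOR in the L3 addresses,
 * fold the upper bits down, and drop the lowest bit (common even-ports
 * pattern).
 */
static uint32_t demo_ip_hash(uint32_t hash, uint32_t saddr, uint32_t daddr)
{
	hash ^= daddr ^ saddr;
	hash ^= (hash >> 16);
	hash ^= (hash >> 8);
	return hash >> 1;
}

int main(void)
{
	uint32_t ports = (443U << 16) | 40000U; /* arbitrary L4 seed, standing in for flow.ports.ports */
	uint32_t hash = demo_ip_hash(ports, 0x0a000001, 0x0a000002);

	printf("hash=%u -> slave index with 2 slaves: %u\n", hash, hash % 2);
	return 0;
}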
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Boris Pismenny
---
 drivers/net/bonding/bond_main.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index ad5192ee1845..759ad22b7279 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -3541,6 +3541,16 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb,
 	return true;
 }
 
+static u32 bond_ip_hash(u32 hash, struct flow_keys *flow)
+{
+	hash ^= (__force u32)flow_get_u32_dst(flow) ^
+		(__force u32)flow_get_u32_src(flow);
+	hash ^= (hash >> 16);
+	hash ^= (hash >> 8);
+	/* discard lowest hash bit to deal with the common even ports pattern */
+	return hash >> 1;
+}
+
 /**
  * bond_xmit_hash - generate a hash value based on the xmit policy
  * @bond: bonding device
@@ -3571,12 +3581,8 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
 		else
 			memcpy(&hash, &flow.ports.ports, sizeof(hash));
 	}
-	hash ^= (__force u32)flow_get_u32_dst(&flow) ^
-		(__force u32)flow_get_u32_src(&flow);
-	hash ^= (hash >> 16);
-	hash ^= (hash >> 8);
 
-	return hash >> 1;
+	return bond_ip_hash(hash, &flow);
 }
 
 /*-------------------------- Device entry points ----------------------------*/
Miller" , Jakub Kicinski Cc: Boris Pismenny , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Jay Vosburgh , Veaceslav Falico , Andy Gospodarek , John Fastabend , Daniel Borkmann , Jarod Wilson , Ivan Vecera , Tariq Toukan Subject: [PATCH net-next V3 3/8] net/bonding: Implement ndo_sk_get_lower_dev Date: Sun, 17 Jan 2021 16:59:44 +0200 Message-Id: <20210117145949.8632-4-tariqt@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210117145949.8632-1-tariqt@nvidia.com> References: <20210117145949.8632-1-tariqt@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add ndo_sk_get_lower_dev() implementation for bond interfaces. Support only for the cases where the socket's and SKBs' hash yields identical value for the whole connection lifetime. Here we restrict it to L3+4 sockets only, with xmit_hash_policy==LAYER34 and bond modes xor/802.3ad. Signed-off-by: Tariq Toukan Reviewed-by: Boris Pismenny --- drivers/net/bonding/bond_main.c | 93 +++++++++++++++++++++++++++++++++ include/net/bonding.h | 2 + 2 files changed, 95 insertions(+) diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 759ad22b7279..09524f99c753 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -301,6 +301,19 @@ netdev_tx_t bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, return dev_queue_xmit(skb); } +bool bond_sk_check(struct bonding *bond) +{ + switch (BOND_MODE(bond)) { + case BOND_MODE_8023AD: + case BOND_MODE_XOR: + if (bond->params.xmit_policy == BOND_XMIT_POLICY_LAYER34) + return true; + fallthrough; + default: + return false; + } +} + /*---------------------------------- VLAN -----------------------------------*/ /* In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid, @@ -4555,6 +4568,85 @@ static struct net_device *bond_xmit_get_slave(struct net_device *master_dev, return NULL; } +static void bond_sk_to_flow(struct sock *sk, struct flow_keys *flow) +{ + switch (sk->sk_family) { +#if IS_ENABLED(CONFIG_IPV6) + case AF_INET6: + if (sk->sk_ipv6only || + ipv6_addr_type(&sk->sk_v6_daddr) != IPV6_ADDR_MAPPED) { + flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS; + flow->addrs.v6addrs.src = inet6_sk(sk)->saddr; + flow->addrs.v6addrs.dst = sk->sk_v6_daddr; + break; + } + fallthrough; +#endif + default: /* AF_INET */ + flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS; + flow->addrs.v4addrs.src = inet_sk(sk)->inet_rcv_saddr; + flow->addrs.v4addrs.dst = inet_sk(sk)->inet_daddr; + break; + } + + flow->ports.src = inet_sk(sk)->inet_sport; + flow->ports.dst = inet_sk(sk)->inet_dport; +} + +/** + * bond_sk_hash_l34 - generate a hash value based on the socket's L3 and L4 fields + * @sk: socket to use for headers + * + * This function will extract the necessary field from the socket and use + * them to generate a hash based on the LAYER34 xmit_policy. + * Assumes that sk is a TCP or UDP socket. + */ +static u32 bond_sk_hash_l34(struct sock *sk) +{ + struct flow_keys flow; + u32 hash; + + bond_sk_to_flow(sk, &flow); + + /* L4 */ + memcpy(&hash, &flow.ports.ports, sizeof(hash)); + /* L3 */ + return bond_ip_hash(hash, &flow); +} + +static struct net_device *__bond_sk_get_lower_dev(struct bonding *bond, + struct sock *sk) +{ + struct bond_up_slave *slaves; + struct slave *slave; + unsigned int count; + u32 hash; + + slaves = rcu_dereference(bond->usable_slaves); + count = slaves ? 
+	if (unlikely(!count))
+		return NULL;
+
+	hash = bond_sk_hash_l34(sk);
+	slave = slaves->arr[hash % count];
+
+	return slave->dev;
+}
+
+static struct net_device *bond_sk_get_lower_dev(struct net_device *dev,
+						struct sock *sk)
+{
+	struct bonding *bond = netdev_priv(dev);
+	struct net_device *lower = NULL;
+
+	rcu_read_lock();
+	if (bond_sk_check(bond))
+		lower = __bond_sk_get_lower_dev(bond, sk);
+	rcu_read_unlock();
+
+	return lower;
+}
+
 static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bonding *bond = netdev_priv(dev);
@@ -4691,6 +4783,7 @@ static const struct net_device_ops bond_netdev_ops = {
 	.ndo_fix_features	= bond_fix_features,
 	.ndo_features_check	= passthru_features_check,
 	.ndo_get_xmit_slave	= bond_xmit_get_slave,
+	.ndo_sk_get_lower_dev	= bond_sk_get_lower_dev,
 };
 
 static const struct device_type bond_type = {
diff --git a/include/net/bonding.h b/include/net/bonding.h
index adc3da776970..21497193c4a4 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -265,6 +265,8 @@ struct bond_vlan_tag {
 	unsigned short	vlan_id;
 };
 
+bool bond_sk_check(struct bonding *bond);
+
 /**
  * Returns NULL if the net_device does not belong to any of the bond's slaves
 *
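With the bond-side implementation in place, a stacked-device-aware consumer can
resolve the netdev that will actually carry a given socket's traffic. A minimal
sketch of such a caller is below; the function name and error handling are
illustrative only, and patch 7 of this series adds the real in-tree user in
net/tls:

/* Sketch: resolve the transmitting lower device for a socket whose route
 * points at a bond. See get_netdev_for_sock() in net/tls/tls_device.c
 * (patch 7) for the in-tree user.
 */
static struct net_device *resolve_tx_dev(struct sock *sk)
{
	struct dst_entry *dst = sk_dst_get(sk);
	struct net_device *lowest = NULL;

	if (dst) {
		/* For a bond in xor/802.3ad mode with xmit_hash_policy
		 * layer3+4, this returns the slave picked by
		 * bond_sk_get_lower_dev(); otherwise it falls back to
		 * dst->dev itself.
		 */
		lowest = netdev_sk_get_lowest_dev(dst->dev, sk);
		dev_hold(lowest);
		dst_release(dst);
	}

	return lowest;
}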
Miller" , Jakub Kicinski Cc: Boris Pismenny , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Jay Vosburgh , Veaceslav Falico , Andy Gospodarek , John Fastabend , Daniel Borkmann , Jarod Wilson , Ivan Vecera , Tariq Toukan Subject: [PATCH net-next V3 4/8] net/bonding: Take update_features call out of XFRM funciton Date: Sun, 17 Jan 2021 16:59:45 +0200 Message-Id: <20210117145949.8632-5-tariqt@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210117145949.8632-1-tariqt@nvidia.com> References: <20210117145949.8632-1-tariqt@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In preparation for more cases that call netdev_update_features(). While here, move the features logic to the stage where struct bond is already updated, and pass it as the only parameter to function bond_set_xfrm_features(). Signed-off-by: Tariq Toukan Reviewed-by: Boris Pismenny --- drivers/net/bonding/bond_options.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c index a4e4e15f574d..7f0ad97926de 100644 --- a/drivers/net/bonding/bond_options.c +++ b/drivers/net/bonding/bond_options.c @@ -745,17 +745,17 @@ const struct bond_option *bond_opt_get(unsigned int option) return &bond_opts[option]; } -static void bond_set_xfrm_features(struct net_device *bond_dev, u64 mode) +static bool bond_set_xfrm_features(struct bonding *bond) { if (!IS_ENABLED(CONFIG_XFRM_OFFLOAD)) - return; + return false; - if (mode == BOND_MODE_ACTIVEBACKUP) - bond_dev->wanted_features |= BOND_XFRM_FEATURES; + if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP) + bond->dev->wanted_features |= BOND_XFRM_FEATURES; else - bond_dev->wanted_features &= ~BOND_XFRM_FEATURES; + bond->dev->wanted_features &= ~BOND_XFRM_FEATURES; - netdev_update_features(bond_dev); + return true; } static int bond_option_mode_set(struct bonding *bond, @@ -780,13 +780,14 @@ static int bond_option_mode_set(struct bonding *bond, if (newval->value == BOND_MODE_ALB) bond->params.tlb_dynamic_lb = 1; - if (bond->dev->reg_state == NETREG_REGISTERED) - bond_set_xfrm_features(bond->dev, newval->value); - /* don't cache arp_validate between modes */ bond->params.arp_validate = BOND_ARP_VALIDATE_NONE; bond->params.mode = newval->value; + if (bond->dev->reg_state == NETREG_REGISTERED) + if (bond_set_xfrm_features(bond)) + netdev_update_features(bond->dev); + return 0; } From patchwork Sun Jan 17 14:59:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 12025443 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNPARSEABLE_RELAY,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFE38C433E0 for ; Sun, 17 Jan 2021 15:01:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C568222262 for ; Sun, 17 Jan 2021 15:01:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729366AbhAQPBc (ORCPT ); 
From patchwork Sun Jan 17 14:59:46 2021
From: Tariq Toukan <tariqt@nvidia.com>
Subject: [PATCH net-next V3 5/8] net/bonding: Implement TLS TX device offload
Date: Sun, 17 Jan 2021 16:59:46 +0200

Implement TLS TX device offload for bonding interfaces. This allows kTLS
sockets running on a bond to benefit from the device offload of capable
lower devices.

To keep maintenance of the TLS context in SW and in the lower devices simple
and fast, we bind the TLS socket to a specific lower dev.

To achieve behavior similar to SW kTLS, we support only the balance-xor and
802.3ad modes, with xmit_hash_policy=layer3+4. This is enforced in
bond_sk_check(), introduced in a previous patch. For this configuration, the
SW implementation keeps picking the exact same lower dev for all of the
socket's SKBs. The device offload behaves similarly, making the decision once
at connection creation.

Per socket, the TLS module should work directly with the lowest netdev in the
chain when calling the tls_dev_ops operations.

As the bond interface is bypassed by the TLS module, which interacts directly
with the lower devs, there is no way for the bond interface to disable its
device offload capabilities as long as the mode/policy configuration allows
it. Hence, the feature flag is not directly controllable; it just reflects the
current offload status based on the logic in bond_sk_check().
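From the application's point of view nothing changes: a kTLS TX socket is set
up as usual, and when its route goes over such a bond the TX context ends up
on the selected slave. A minimal user-space sketch of that setup is below
(placeholder key material, error handling trimmed; the TCP_ULP/SOL_TLS
fallback defines follow the kernel TLS documentation in case the libc headers
lack them):

#include <linux/tls.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>

#ifndef TCP_ULP
#define TCP_ULP 31
#endif
#ifndef SOL_TLS
#define SOL_TLS 282
#endif

/* Attach the kTLS ULP and install a TX key on a connected TCP socket.
 * Key material here is zeroed placeholder data; a real application takes
 * it from its TLS handshake.
 */
static int enable_ktls_tx(int fd)
{
	struct tls12_crypto_info_aes_gcm_128 crypto_info;

	memset(&crypto_info, 0, sizeof(crypto_info));
	crypto_info.info.version = TLS_1_2_VERSION;
	crypto_info.info.cipher_type = TLS_CIPHER_AES_GCM_128;
	/* crypto_info.iv / .rec_seq / .key / .salt come from the handshake */

	if (setsockopt(fd, SOL_TCP, TCP_ULP, "tls", sizeof("tls")))
		return -1;

	return setsockopt(fd, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info));
}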
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Boris Pismenny
---
 drivers/net/bonding/bond_main.c    | 29 +++++++++++++++++++++++++++++
 drivers/net/bonding/bond_options.c | 27 +++++++++++++++++++++++++--
 include/net/bonding.h              |  2 ++
 3 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 09524f99c753..539c6bc218df 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -83,6 +83,9 @@
 #include
 #include
 #include
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+#include <net/tls.h>
+#endif
 
 #include "bonding_priv.h"
 
@@ -1225,6 +1228,13 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
 	netdev_features_t mask;
 	struct slave *slave;
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if (bond_sk_check(bond))
+		features |= BOND_TLS_FEATURES;
+	else
+		features &= ~BOND_TLS_FEATURES;
+#endif
+
 	mask = features;
 
 	features &= ~NETIF_F_ONE_FOR_ALL;
@@ -4647,6 +4657,16 @@ static struct net_device *bond_sk_get_lower_dev(struct net_device *dev,
 	return lower;
 }
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+static netdev_tx_t bond_tls_device_xmit(struct bonding *bond, struct sk_buff *skb,
+					struct net_device *dev)
+{
+	if (likely(bond_get_slave_by_dev(bond, tls_get_ctx(skb->sk)->netdev)))
+		return bond_dev_queue_xmit(bond, skb, tls_get_ctx(skb->sk)->netdev);
+	return bond_tx_drop(dev, skb);
+}
+#endif
+
 static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bonding *bond = netdev_priv(dev);
@@ -4655,6 +4675,11 @@ static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	    !bond_slave_override(bond, skb))
 		return NETDEV_TX_OK;
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if (skb->sk && tls_is_sk_tx_device_offloaded(skb->sk))
+		return bond_tls_device_xmit(bond, skb, dev);
+#endif
+
 	switch (BOND_MODE(bond)) {
 	case BOND_MODE_ROUNDROBIN:
 		return bond_xmit_roundrobin(skb, dev);
@@ -4855,6 +4880,10 @@ void bond_setup(struct net_device *bond_dev)
 	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
 		bond_dev->features |= BOND_XFRM_FEATURES;
 #endif /* CONFIG_XFRM_OFFLOAD */
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if (bond_sk_check(bond))
+		bond_dev->features |= BOND_TLS_FEATURES;
+#endif
 }
 
 /* Destroy a bonding device.
diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
index 7f0ad97926de..8fcbf7f9c7b2 100644
--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -758,6 +758,19 @@ static bool bond_set_xfrm_features(struct bonding *bond)
 	return true;
 }
 
+static bool bond_set_tls_features(struct bonding *bond)
+{
+	if (!IS_ENABLED(CONFIG_TLS_DEVICE))
+		return false;
+
+	if (bond_sk_check(bond))
+		bond->dev->wanted_features |= BOND_TLS_FEATURES;
+	else
+		bond->dev->wanted_features &= ~BOND_TLS_FEATURES;
+
+	return true;
+}
+
 static int bond_option_mode_set(struct bonding *bond,
 				const struct bond_opt_value *newval)
 {
@@ -784,9 +797,15 @@ static int bond_option_mode_set(struct bonding *bond,
 	bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
 	bond->params.mode = newval->value;
 
-	if (bond->dev->reg_state == NETREG_REGISTERED)
-		if (bond_set_xfrm_features(bond))
+	if (bond->dev->reg_state == NETREG_REGISTERED) {
+		bool update = false;
+
+		update |= bond_set_xfrm_features(bond);
+		update |= bond_set_tls_features(bond);
+
+		if (update)
 			netdev_update_features(bond->dev);
+	}
 
 	return 0;
 }
@@ -1220,6 +1239,10 @@ static int bond_option_xmit_hash_policy_set(struct bonding *bond,
 		   newval->string, newval->value);
 	bond->params.xmit_policy = newval->value;
 
+	if (bond->dev->reg_state == NETREG_REGISTERED)
+		if (bond_set_tls_features(bond))
+			netdev_update_features(bond->dev);
+
 	return 0;
 }
 
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 21497193c4a4..97fbec02df2d 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -89,6 +89,8 @@
 #define BOND_XFRM_FEATURES (NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM | \
 			    NETIF_F_GSO_ESP)
 
+#define BOND_TLS_FEATURES (NETIF_F_HW_TLS_TX)
+
 #ifdef CONFIG_NET_POLL_CONTROLLER
 extern atomic_t netpoll_block_tx;
Miller" , Jakub Kicinski Cc: Boris Pismenny , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Jay Vosburgh , Veaceslav Falico , Andy Gospodarek , John Fastabend , Daniel Borkmann , Jarod Wilson , Ivan Vecera , Tariq Toukan Subject: [PATCH net-next V3 6/8] net/bonding: Declare TLS RX device offload support Date: Sun, 17 Jan 2021 16:59:47 +0200 Message-Id: <20210117145949.8632-7-tariqt@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210117145949.8632-1-tariqt@nvidia.com> References: <20210117145949.8632-1-tariqt@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Following the description in previous patch (for TX): As the bond interface is being bypassed by the TLS module, interacting directly against the lower devs, there is no way for the bond interface to disable its device offload capabilities, as long as the mode/policy config allows it. Hence, the feature flag is not directly controllable, but just reflects the offload status based on the logic under bond_sk_check(). Here we just declare RX device offload support, and expose it via the NETIF_F_HW_TLS_RX flag. Signed-off-by: Tariq Toukan Reviewed-by: Boris Pismenny --- include/net/bonding.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/bonding.h b/include/net/bonding.h index 97fbec02df2d..019e998d944a 100644 --- a/include/net/bonding.h +++ b/include/net/bonding.h @@ -89,7 +89,7 @@ #define BOND_XFRM_FEATURES (NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM | \ NETIF_F_GSO_ESP) -#define BOND_TLS_FEATURES (NETIF_F_HW_TLS_TX) +#define BOND_TLS_FEATURES (NETIF_F_HW_TLS_TX | NETIF_F_HW_TLS_RX) #ifdef CONFIG_NET_POLL_CONTROLLER extern atomic_t netpoll_block_tx; From patchwork Sun Jan 17 14:59:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 12025445 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNPARSEABLE_RELAY,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9B07C433DB for ; Sun, 17 Jan 2021 15:01:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8F6692226A for ; Sun, 17 Jan 2021 15:01:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729399AbhAQPB4 (ORCPT ); Sun, 17 Jan 2021 10:01:56 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:45196 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1729195AbhAQPBF (ORCPT ); Sun, 17 Jan 2021 10:01:05 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from tariqt@nvidia.com) with SMTP; 17 Jan 2021 17:00:16 +0200 Received: from dev-l-vrt-206-005.mtl.labs.mlnx (dev-l-vrt-206-005.mtl.labs.mlnx [10.234.206.5]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10HF0F7I029614; Sun, 17 Jan 2021 17:00:16 +0200 From: Tariq Toukan To: "David S. 
Miller" , Jakub Kicinski Cc: Boris Pismenny , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Jay Vosburgh , Veaceslav Falico , Andy Gospodarek , John Fastabend , Daniel Borkmann , Jarod Wilson , Ivan Vecera , Tariq Toukan Subject: [PATCH net-next V3 7/8] net/tls: Device offload to use lowest netdevice in chain Date: Sun, 17 Jan 2021 16:59:48 +0200 Message-Id: <20210117145949.8632-8-tariqt@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210117145949.8632-1-tariqt@nvidia.com> References: <20210117145949.8632-1-tariqt@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Do not call the tls_dev_ops of upper devices. Instead, ask them for the proper lowest device and communicate with it directly. Signed-off-by: Tariq Toukan Reviewed-by: Boris Pismenny --- net/tls/tls_device.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index f7fb7d2c1de1..75ceea0a41bf 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -113,7 +113,7 @@ static struct net_device *get_netdev_for_sock(struct sock *sk) struct net_device *netdev = NULL; if (likely(dst)) { - netdev = dst->dev; + netdev = netdev_sk_get_lowest_dev(dst->dev, sk); dev_hold(netdev); } From patchwork Sun Jan 17 14:59:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 12025433 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNPARSEABLE_RELAY,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1395AC433DB for ; Sun, 17 Jan 2021 15:01:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CCAB321D81 for ; Sun, 17 Jan 2021 15:01:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729308AbhAQPBT (ORCPT ); Sun, 17 Jan 2021 10:01:19 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:45212 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1729218AbhAQPBF (ORCPT ); Sun, 17 Jan 2021 10:01:05 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from tariqt@nvidia.com) with SMTP; 17 Jan 2021 17:00:16 +0200 Received: from dev-l-vrt-206-005.mtl.labs.mlnx (dev-l-vrt-206-005.mtl.labs.mlnx [10.234.206.5]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10HF0F7J029614; Sun, 17 Jan 2021 17:00:16 +0200 From: Tariq Toukan To: "David S. 
Miller" , Jakub Kicinski Cc: Boris Pismenny , netdev@vger.kernel.org, Tariq Toukan , Moshe Shemesh , Jay Vosburgh , Veaceslav Falico , Andy Gospodarek , John Fastabend , Daniel Borkmann , Jarod Wilson , Ivan Vecera , Tariq Toukan Subject: [PATCH net-next V3 8/8] net/tls: Except bond interface from some TLS checks Date: Sun, 17 Jan 2021 16:59:49 +0200 Message-Id: <20210117145949.8632-9-tariqt@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210117145949.8632-1-tariqt@nvidia.com> References: <20210117145949.8632-1-tariqt@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In the tls_dev_event handler, ignore tlsdev_ops requirement for bond interfaces, they do not exist as the interaction is done directly with the lower device. Also, make the validate function pass when it's called with the upper bond interface. Signed-off-by: Tariq Toukan Reviewed-by: Boris Pismenny --- net/tls/tls_device.c | 2 ++ net/tls/tls_device_fallback.c | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index 75ceea0a41bf..d9cd229aa111 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -1329,6 +1329,8 @@ static int tls_dev_event(struct notifier_block *this, unsigned long event, switch (event) { case NETDEV_REGISTER: case NETDEV_FEAT_CHANGE: + if (netif_is_bond_master(dev)) + return NOTIFY_DONE; if ((dev->features & NETIF_F_HW_TLS_RX) && !dev->tlsdev_ops->tls_dev_resync) return NOTIFY_BAD; diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c index d946817ed065..cacf040872c7 100644 --- a/net/tls/tls_device_fallback.c +++ b/net/tls/tls_device_fallback.c @@ -424,7 +424,7 @@ struct sk_buff *tls_validate_xmit_skb(struct sock *sk, struct net_device *dev, struct sk_buff *skb) { - if (dev == tls_get_ctx(sk)->netdev) + if (dev == tls_get_ctx(sk)->netdev || netif_is_bond_master(dev)) return skb; return tls_sw_fallback(sk, skb);