From patchwork Tue Dec 29 11:41:03 2020
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 11992207
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Tariq Toukan <tariqt@nvidia.com>
To: "David S. Miller", Jakub Kicinski
Cc: Saeed Mahameed, Boris Pismenny, netdev@vger.kernel.org,
    Moshe Shemesh, andy@greyhouse.net, vfalico@gmail.com,
    j.vosburgh@gmail.com, Tariq Toukan
Subject: [PATCH RFC net-next 5/6] net/bonding: Implement ndo_sk_get_slave
Date: Tue, 29 Dec 2020 13:41:03 +0200
Message-Id: <20201229114104.7120-6-tariqt@nvidia.com>
In-Reply-To: <20201229114104.7120-1-tariqt@nvidia.com>
References: <20201229114104.7120-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Implement the new ndo_sk_get_slave netdev op in the bonding driver.
Support L3/4 sockets only, with xmit_hash_policy == LAYER34 and bond
modes xor/802.3ad.
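As a quick orientation for reviewers, here is a minimal sketch (not part
of this patch, and the helper name sk_get_lower_dev is hypothetical) of
how an upper layer could consult the new callback; only the
ndo_sk_get_slave signature is taken from this series:

/* Hypothetical caller, for illustration only: ask a master device for
 * the lower device that will carry traffic of a given TCP/UDP socket.
 * A NULL return means the device cannot (or will not) resolve one.
 */
static struct net_device *sk_get_lower_dev(struct net_device *master_dev,
					   struct sock *sk)
{
	const struct net_device_ops *ops = master_dev->netdev_ops;

	if (ops->ndo_sk_get_slave)
		return ops->ndo_sk_get_slave(master_dev, sk);
	return NULL;
}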
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/net/bonding/bond_main.c | 90 +++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8bc7629a2805..0303e43e5fcf 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -301,6 +301,19 @@ netdev_tx_t bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
 	return dev_queue_xmit(skb);
 }
 
+static bool bond_sk_check(struct bonding *bond)
+{
+	switch (BOND_MODE(bond)) {
+	case BOND_MODE_8023AD:
+	case BOND_MODE_XOR:
+		if (bond->params.xmit_policy == BOND_XMIT_POLICY_LAYER34)
+			return true;
+		fallthrough;
+	default:
+		return false;
+	}
+}
+
 /*---------------------------------- VLAN -----------------------------------*/
 
 /* In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid,
@@ -4553,6 +4566,82 @@ static struct net_device *bond_xmit_get_slave(struct net_device *master_dev,
 	return NULL;
 }
 
+static void bond_sk_to_flow(struct sock *sk, struct flow_keys *flow)
+{
+	switch (sk->sk_family) {
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		if (sk->sk_ipv6only ||
+		    ipv6_addr_type(&sk->sk_v6_daddr) != IPV6_ADDR_MAPPED) {
+			flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+			flow->addrs.v6addrs.src = inet6_sk(sk)->saddr;
+			flow->addrs.v6addrs.dst = sk->sk_v6_daddr;
+			break;
+		}
+		fallthrough;
+#endif
+	default: /* AF_INET */
+		flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+		flow->addrs.v4addrs.src = inet_sk(sk)->inet_rcv_saddr;
+		flow->addrs.v4addrs.dst = inet_sk(sk)->inet_daddr;
+		break;
+	}
+
+	flow->ports.src = inet_sk(sk)->inet_sport;
+	flow->ports.dst = inet_sk(sk)->inet_dport;
+}
+
+/**
+ * bond_sk_hash_l34 - generate a hash value based on the socket's L3 and L4 fields
+ * @sk: socket to use for headers
+ *
+ * This function will extract the necessary fields from the socket and use
+ * them to generate a hash based on the LAYER34 xmit_policy.
+ * Assumes that sk is a TCP or UDP socket.
+ */
+static u32 bond_sk_hash_l34(struct sock *sk)
+{
+	struct flow_keys flow;
+	u32 hash;
+
+	bond_sk_to_flow(sk, &flow);
+
+	/* L4 */
+	memcpy(&hash, &flow.ports.ports, sizeof(hash));
+	/* L3 */
+	return bond_ip_hash(hash, &flow);
+}
+
+static struct net_device *__bond_sk_get_slave_dev(struct bonding *bond,
+						  struct sock *sk)
+{
+	struct bond_up_slave *slaves;
+	struct slave *slave;
+	unsigned int count;
+	u32 hash;
+
+	slaves = rcu_dereference(bond->usable_slaves);
+	count = slaves ? READ_ONCE(slaves->count) : 0;
+	if (unlikely(!count))
+		return NULL;
+
+	hash = bond_sk_hash_l34(sk);
+	slave = slaves->arr[hash % count];
+
+	return slave->dev;
+}
+
+static struct net_device *bond_sk_get_slave(struct net_device *master_dev,
+					    struct sock *sk)
+{
+	struct bonding *bond = netdev_priv(master_dev);
+
+	if (bond_sk_check(bond))
+		return __bond_sk_get_slave_dev(bond, sk);
+
+	return NULL;
+}
+
 static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bonding *bond = netdev_priv(dev);
@@ -4689,6 +4778,7 @@ static const struct net_device_ops bond_netdev_ops = {
 	.ndo_fix_features	= bond_fix_features,
 	.ndo_features_check	= passthru_features_check,
 	.ndo_get_xmit_slave	= bond_xmit_get_slave,
+	.ndo_sk_get_slave	= bond_sk_get_slave,
 };
 
 static const struct device_type bond_type = {
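For anyone who wants to experiment with the selection logic outside the
kernel, below is a small userspace sketch. All names in it (flow_l34,
hash_l34, pick_slave) are illustrative; the port-packing step matches
bond_sk_hash_l34() above, while the xor-and-shift fold approximates the
existing bond_ip_hash() helper (already in bond_main.c, not shown in
this diff) for the IPv4 case:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative userspace model of the LAYER34 slave selection. */
struct flow_l34 {
	uint32_t saddr;		/* IPv4 source address, network order */
	uint32_t daddr;		/* IPv4 destination address, network order */
	uint16_t sport;		/* L4 source port, network order */
	uint16_t dport;		/* L4 destination port, network order */
};

static uint32_t hash_l34(const struct flow_l34 *f)
{
	uint32_t hash;

	/* L4: pack both 16-bit ports into one 32-bit word, as the
	 * memcpy() from flow.ports.ports does in bond_sk_hash_l34().
	 */
	memcpy(&hash, &f->sport, sizeof(hash));
	/* L3: fold in the addresses and mix, mimicking bond_ip_hash(). */
	hash ^= f->saddr ^ f->daddr;
	hash ^= hash >> 16;
	hash ^= hash >> 8;
	return hash;
}

static unsigned int pick_slave(const struct flow_l34 *f, unsigned int count)
{
	/* Same modulo indexing as __bond_sk_get_slave_dev(). */
	return hash_l34(f) % count;
}

int main(void)
{
	struct flow_l34 f = {
		.saddr = 0x0a00000a, .daddr = 0x0b00000a,
		.sport = 0x3175, .dport = 0x5000,
	};

	/* A given 4-tuple always maps to the same slave index. */
	printf("slave index: %u\n", pick_slave(&f, 2));
	return 0;
}

The key property this models is stickiness: the chosen slave depends
only on the socket's 4-tuple and the usable-slave count, so a connection
stays on one slave for as long as the usable set is stable.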