From patchwork Fri Mar 31 08:29:43 2023
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13195437
X-Patchwork-Delegate: kuba@kernel.org
From: Felix Fietkau
To: netdev@vger.kernel.org
Cc: Daniel Golle
Subject: [PATCH net-next 1/3] net: ethernet: mtk_eth_soc: improve keeping track of offloaded flows
Date: Fri, 31 Mar 2023 10:29:43 +0200
Message-Id: <20230331082945.75075-1-nbd@nbd.name>
X-Mailer: git-send-email 2.39.0
X-Mailing-List: netdev@vger.kernel.org

Unify tracking of L2 and L3 flows. Use the generic list field in
struct mtk_flow_entry for tracking L2 subflows. This is preparation for
improving flow accounting support.
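[Editor's illustration] The following is a simplified, stand-alone C sketch of
the tracking layout this patch moves to. It is not the driver code: the list
types and all names below are illustrative only. The idea it shows is that an
L2 flow owns a list of its subflows and refreshes its own state by walking
that list.

/*
 * Simplified user-space sketch (not driver code): every offloaded entry
 * tracks its own hardware state, and an L2 flow additionally owns a list
 * of subflows that is walked to refresh the parent's idle time.
 */
#include <stdio.h>

struct flow_entry {
	int hash;                     /* hardware hash slot, -1 when unbound */
	unsigned int idle;            /* seconds since the last hardware hit */
	struct flow_entry *l2_next;   /* linkage on the parent's subflow list */
	struct flow_entry *subflows;  /* head of the subflow list (L2 flows) */
};

/* The parent reports the smallest idle time of any still-bound subflow. */
static void refresh_l2_flow(struct flow_entry *l2)
{
	for (struct flow_entry *cur = l2->subflows; cur; cur = cur->l2_next) {
		if (cur->hash < 0)    /* subflow no longer bound in hardware */
			continue;
		if (cur->idle < l2->idle)
			l2->idle = cur->idle;
	}
}

int main(void)
{
	struct flow_entry sub1 = { .hash = 1, .idle = 30 };
	struct flow_entry sub2 = { .hash = 2, .idle = 5, .l2_next = &sub1 };
	struct flow_entry l2 = { .hash = -1, .idle = 60, .subflows = &sub2 };

	refresh_l2_flow(&l2);
	printf("aggregate idle time: %u\n", l2.idle);   /* prints 5 */
	return 0;
}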
Signed-off-by: Felix Fietkau
Reviewed-by: Leon Romanovsky
---
 drivers/net/ethernet/mediatek/mtk_ppe.c | 162 ++++++++++++------------
 drivers/net/ethernet/mediatek/mtk_ppe.h |  15 +--
 2 files changed, 86 insertions(+), 91 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index dd9581334b05..e9bcbdbe9c12 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -460,42 +460,43 @@ int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry,
 	return 0;
 }
 
+static int
+mtk_flow_entry_match_len(struct mtk_eth *eth, struct mtk_foe_entry *entry)
+{
+	int type = mtk_get_ib1_pkt_type(eth, entry->ib1);
+
+	if (type > MTK_PPE_PKT_TYPE_IPV4_DSLITE)
+		return offsetof(struct mtk_foe_entry, ipv6._rsv);
+	else
+		return offsetof(struct mtk_foe_entry, ipv4.ib2);
+}
+
 static bool
 mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry,
-		     struct mtk_foe_entry *data)
+		     struct mtk_foe_entry *data, int len)
 {
-	int type, len;
-
 	if ((data->ib1 ^ entry->data.ib1) & MTK_FOE_IB1_UDP)
 		return false;
 
-	type = mtk_get_ib1_pkt_type(eth, entry->data.ib1);
-	if (type > MTK_PPE_PKT_TYPE_IPV4_DSLITE)
-		len = offsetof(struct mtk_foe_entry, ipv6._rsv);
-	else
-		len = offsetof(struct mtk_foe_entry, ipv4.ib2);
-
 	return !memcmp(&entry->data.data, &data->data, len - 4);
 }
 
 static void
-__mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
+__mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
+		      bool set_state)
 {
-	struct hlist_head *head;
 	struct hlist_node *tmp;
 
 	if (entry->type == MTK_FLOW_TYPE_L2) {
 		rhashtable_remove_fast(&ppe->l2_flows, &entry->l2_node,
 				       mtk_flow_l2_ht_params);
 
-		head = &entry->l2_flows;
-		hlist_for_each_entry_safe(entry, tmp, head, l2_data.list)
-			__mtk_foe_entry_clear(ppe, entry);
+		hlist_for_each_entry_safe(entry, tmp, &entry->l2_flows, l2_list)
+			__mtk_foe_entry_clear(ppe, entry, set_state);
 		return;
 	}
 
-	hlist_del_init(&entry->list);
-	if (entry->hash != 0xffff) {
+	if (entry->hash != 0xffff && set_state) {
 		struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, entry->hash);
 
 		hwe->ib1 &= ~MTK_FOE_IB1_STATE;
@@ -516,7 +517,8 @@ __mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 	if (entry->type != MTK_FLOW_TYPE_L2_SUBFLOW)
 		return;
 
-	hlist_del_init(&entry->l2_data.list);
+	hlist_del_init(&entry->l2_list);
+	hlist_del_init(&entry->list);
 	kfree(entry);
 }
 
@@ -532,68 +534,57 @@ static int __mtk_foe_entry_idle_time(struct mtk_ppe *ppe, u32 ib1)
 	return now - timestamp;
 }
 
+static bool
+mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
+{
+	struct mtk_foe_entry foe = {};
+	struct mtk_foe_entry *hwe;
+	u16 hash = entry->hash;
+	int len;
+
+	if (hash == 0xffff)
+		return false;
+
+	hwe = mtk_foe_get_entry(ppe, hash);
+	len = mtk_flow_entry_match_len(ppe->eth, &entry->data);
+	memcpy(&foe, hwe, len);
+
+	if (!mtk_flow_entry_match(ppe->eth, entry, &foe, len) ||
+	    FIELD_GET(MTK_FOE_IB1_STATE, foe.ib1) != MTK_FOE_STATE_BIND)
+		return false;
+
+	entry->data.ib1 = foe.ib1;
+
+	return true;
+}
+
 static void
 mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
 	u32 ib1_ts_mask = mtk_get_ib1_ts_mask(ppe->eth);
 	struct mtk_flow_entry *cur;
-	struct mtk_foe_entry *hwe;
 	struct hlist_node *tmp;
 	int idle;
 
 	idle = __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
-	hlist_for_each_entry_safe(cur, tmp, &entry->l2_flows, l2_data.list) {
+	hlist_for_each_entry_safe(cur, tmp, &entry->l2_flows, l2_list) {
 		int cur_idle;
-		u32 ib1;
-
-		hwe = mtk_foe_get_entry(ppe, cur->hash);
-		ib1 = READ_ONCE(hwe->ib1);
-		if (FIELD_GET(MTK_FOE_IB1_STATE, ib1) != MTK_FOE_STATE_BIND) {
-			cur->hash = 0xffff;
-			__mtk_foe_entry_clear(ppe, cur);
+		if (!mtk_flow_entry_update(ppe, cur)) {
+			__mtk_foe_entry_clear(ppe, entry, false);
 			continue;
 		}
 
-		cur_idle = __mtk_foe_entry_idle_time(ppe, ib1);
+		cur_idle = __mtk_foe_entry_idle_time(ppe, cur->data.ib1);
 		if (cur_idle >= idle)
 			continue;
 
 		idle = cur_idle;
 		entry->data.ib1 &= ~ib1_ts_mask;
-		entry->data.ib1 |= hwe->ib1 & ib1_ts_mask;
+		entry->data.ib1 |= cur->data.ib1 & ib1_ts_mask;
 	}
 }
 
-static void
-mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
-{
-	struct mtk_foe_entry foe = {};
-	struct mtk_foe_entry *hwe;
-
-	spin_lock_bh(&ppe_lock);
-
-	if (entry->type == MTK_FLOW_TYPE_L2) {
-		mtk_flow_entry_update_l2(ppe, entry);
-		goto out;
-	}
-
-	if (entry->hash == 0xffff)
-		goto out;
-
-	hwe = mtk_foe_get_entry(ppe, entry->hash);
-	memcpy(&foe, hwe, ppe->eth->soc->foe_entry_size);
-	if (!mtk_flow_entry_match(ppe->eth, entry, &foe)) {
-		entry->hash = 0xffff;
-		goto out;
-	}
-
-	entry->data.ib1 = foe.ib1;
-
-out:
-	spin_unlock_bh(&ppe_lock);
-}
-
 static void
 __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 		       u16 hash)
@@ -628,7 +619,8 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
 	spin_lock_bh(&ppe_lock);
-	__mtk_foe_entry_clear(ppe, entry);
+	__mtk_foe_entry_clear(ppe, entry, true);
+	hlist_del_init(&entry->list);
 	spin_unlock_bh(&ppe_lock);
 }
 
@@ -675,8 +667,8 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
 {
 	const struct mtk_soc_data *soc = ppe->eth->soc;
 	struct mtk_flow_entry *flow_info;
-	struct mtk_foe_entry foe = {}, *hwe;
 	struct mtk_foe_mac_info *l2;
+	struct mtk_foe_entry *hwe;
 	u32 ib1_mask = mtk_get_ib1_pkt_type_mask(ppe->eth) | MTK_FOE_IB1_UDP;
 	int type;
 
@@ -684,30 +676,30 @@ mtk_foe_entry_commit_subflow(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
 	if (!flow_info)
 		return;
 
-	flow_info->l2_data.base_flow = entry;
 	flow_info->type = MTK_FLOW_TYPE_L2_SUBFLOW;
 	flow_info->hash = hash;
 	hlist_add_head(&flow_info->list,
 		       &ppe->foe_flow[hash / soc->hash_offset]);
-	hlist_add_head(&flow_info->l2_data.list, &entry->l2_flows);
+	hlist_add_head(&flow_info->l2_list, &entry->l2_flows);
 
 	hwe = mtk_foe_get_entry(ppe, hash);
-	memcpy(&foe, hwe, soc->foe_entry_size);
-	foe.ib1 &= ib1_mask;
-	foe.ib1 |= entry->data.ib1 & ~ib1_mask;
+	memcpy(&flow_info->data, hwe, soc->foe_entry_size);
+	flow_info->data.ib1 &= ib1_mask;
+	flow_info->data.ib1 |= entry->data.ib1 & ~ib1_mask;
 
-	l2 = mtk_foe_entry_l2(ppe->eth, &foe);
+	l2 = mtk_foe_entry_l2(ppe->eth, &flow_info->data);
 	memcpy(l2, &entry->data.bridge.l2, sizeof(*l2));
 
-	type = mtk_get_ib1_pkt_type(ppe->eth, foe.ib1);
+	type = mtk_get_ib1_pkt_type(ppe->eth, flow_info->data.ib1);
 	if (type == MTK_PPE_PKT_TYPE_IPV4_HNAPT)
-		memcpy(&foe.ipv4.new, &foe.ipv4.orig, sizeof(foe.ipv4.new));
+		memcpy(&flow_info->data.ipv4.new, &flow_info->data.ipv4.orig,
+		       sizeof(flow_info->data.ipv4.new));
 	else if (type >= MTK_PPE_PKT_TYPE_IPV6_ROUTE_3T && l2->etype == ETH_P_IP)
 		l2->etype = ETH_P_IPV6;
 
-	*mtk_foe_entry_ib2(ppe->eth, &foe) = entry->data.bridge.ib2;
+	*mtk_foe_entry_ib2(ppe->eth, &flow_info->data) = entry->data.bridge.ib2;
 
-	__mtk_foe_entry_commit(ppe, &foe, hash);
+	__mtk_foe_entry_commit(ppe, &flow_info->data, hash);
 }
 
 void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb,
			 u16 hash)
@@ -717,9 +709,11 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 	struct mtk_foe_entry *hwe = mtk_foe_get_entry(ppe, hash);
 	struct mtk_flow_entry *entry;
 	struct mtk_foe_bridge key = {};
+	struct mtk_foe_entry foe = {};
 	struct hlist_node *n;
 	struct ethhdr *eh;
 	bool found = false;
+	int entry_len;
 	u8 *tag;
 
 	spin_lock_bh(&ppe_lock);
@@ -727,20 +721,14 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 	if (FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) == MTK_FOE_STATE_BIND)
 		goto out;
 
-	hlist_for_each_entry_safe(entry, n, head, list) {
-		if (entry->type == MTK_FLOW_TYPE_L2_SUBFLOW) {
-			if (unlikely(FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) ==
-				     MTK_FOE_STATE_BIND))
-				continue;
-
-			entry->hash = 0xffff;
-			__mtk_foe_entry_clear(ppe, entry);
-			continue;
-		}
+	entry_len = mtk_flow_entry_match_len(ppe->eth, hwe);
+	memcpy(&foe, hwe, entry_len);
 
-		if (found || !mtk_flow_entry_match(ppe->eth, entry, hwe)) {
+	hlist_for_each_entry_safe(entry, n, head, list) {
+		if (found ||
+		    !mtk_flow_entry_match(ppe->eth, entry, &foe, entry_len)) {
 			if (entry->hash != 0xffff)
-				entry->hash = 0xffff;
+				__mtk_foe_entry_clear(ppe, entry, false);
 			continue;
 		}
 
@@ -791,9 +779,17 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 
 int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
-	mtk_flow_entry_update(ppe, entry);
+	int idle;
+
+	spin_lock_bh(&ppe_lock);
+	if (entry->type == MTK_FLOW_TYPE_L2)
+		mtk_flow_entry_update_l2(ppe, entry);
+	else
+		mtk_flow_entry_update(ppe, entry);
+	idle = __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
+	spin_unlock_bh(&ppe_lock);
 
-	return __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
+	return idle;
 }
 
 int mtk_ppe_prepare_reset(struct mtk_ppe *ppe)
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index e1aab2e8e262..6823256016a2 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -265,7 +265,12 @@ enum {
 
 struct mtk_flow_entry {
 	union {
-		struct hlist_node list;
+		/* regular flows + L2 subflows */
+		struct {
+			struct hlist_node list;
+			struct hlist_node l2_list;
+		};
+		/* L2 flows */
 		struct {
 			struct rhash_head l2_node;
 			struct hlist_head l2_flows;
@@ -275,13 +280,7 @@ struct mtk_flow_entry {
 	s8 wed_index;
 	u8 ppe_index;
 	u16 hash;
-	union {
-		struct mtk_foe_entry data;
-		struct {
-			struct mtk_flow_entry *base_flow;
-			struct hlist_node list;
-		} l2_data;
-	};
+	struct mtk_foe_entry data;
 	struct rhash_head node;
 	unsigned long cookie;
 };

From patchwork Fri Mar 31 08:29:44 2023
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13195436
X-Patchwork-Delegate: kuba@kernel.org
From: Felix Fietkau
To: netdev@vger.kernel.org
Cc: Daniel Golle
Subject: [PATCH net-next 2/3] net: ethernet: mtk_eth_soc: fix ppe flow accounting for L2 flows
Date: Fri, 31 Mar 2023 10:29:44 +0200
Message-Id: <20230331082945.75075-2-nbd@nbd.name>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230331082945.75075-1-nbd@nbd.name>
References: <20230331082945.75075-1-nbd@nbd.name>
X-Mailing-List: netdev@vger.kernel.org

For L2 flows, the packet/byte counters should report the sum of the
counters of their subflows, both current and expired.
In order to make this work, change the way that accounting data is
tracked. Reset counters when a flow enters bind. Once it expires (or
enters unbind), store the last counter value in struct mtk_flow_entry.

Fixes: 3fbe4d8c0e53 ("net: ethernet: mtk_eth_soc: ppe: add support for flow accounting")
Signed-off-by: Felix Fietkau
Reviewed-by: Leon Romanovsky
---
 drivers/net/ethernet/mediatek/mtk_ppe.c      | 137 ++++++++++--------
 drivers/net/ethernet/mediatek/mtk_ppe.h      |   8 +-
 .../net/ethernet/mediatek/mtk_ppe_debugfs.c  |   2 +-
 .../net/ethernet/mediatek/mtk_ppe_offload.c  |  17 +--
 4 files changed, 88 insertions(+), 76 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index e9bcbdbe9c12..64e8dc8d814b 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -80,9 +80,9 @@ static int mtk_ppe_mib_wait_busy(struct mtk_ppe *ppe)
 	int ret;
 	u32 val;
 
-	ret = readl_poll_timeout(ppe->base + MTK_PPE_MIB_SER_CR, val,
-				 !(val & MTK_PPE_MIB_SER_CR_ST),
-				 20, MTK_PPE_WAIT_TIMEOUT_US);
+	ret = readl_poll_timeout_atomic(ppe->base + MTK_PPE_MIB_SER_CR, val,
+					!(val & MTK_PPE_MIB_SER_CR_ST),
+					20, MTK_PPE_WAIT_TIMEOUT_US);
 	if (ret)
 		dev_err(ppe->dev, "MIB table busy");
@@ -90,18 +90,32 @@ static int mtk_ppe_mib_wait_busy(struct mtk_ppe *ppe)
 	return ret;
 }
 
-static int mtk_mib_entry_read(struct mtk_ppe *ppe, u16 index, u64 *bytes, u64 *packets)
+static inline struct mtk_foe_accounting *
+mtk_ppe_acct_data(struct mtk_ppe *ppe, u16 index)
+{
+	if (!ppe->acct_table)
+		return NULL;
+
+	return ppe->acct_table + index * sizeof(struct mtk_foe_accounting);
+}
+
+struct mtk_foe_accounting *mtk_ppe_mib_entry_read(struct mtk_ppe *ppe, u16 index)
 {
 	u32 byte_cnt_low, byte_cnt_high, pkt_cnt_low, pkt_cnt_high;
 	u32 val, cnt_r0, cnt_r1, cnt_r2;
+	struct mtk_foe_accounting *acct;
 	int ret;
 
 	val = FIELD_PREP(MTK_PPE_MIB_SER_CR_ADDR, index) | MTK_PPE_MIB_SER_CR_ST;
 	ppe_w32(ppe, MTK_PPE_MIB_SER_CR, val);
 
+	acct = mtk_ppe_acct_data(ppe, index);
+	if (!acct)
+		return NULL;
+
 	ret = mtk_ppe_mib_wait_busy(ppe);
 	if (ret)
-		return ret;
+		return acct;
 
 	cnt_r0 = readl(ppe->base + MTK_PPE_MIB_SER_R0);
 	cnt_r1 = readl(ppe->base + MTK_PPE_MIB_SER_R1);
@@ -111,10 +125,11 @@ static int mtk_mib_entry_read(struct mtk_ppe *ppe, u16 index, u64 *bytes, u64 *p
 	byte_cnt_high = FIELD_GET(MTK_PPE_MIB_SER_R1_BYTE_CNT_HIGH, cnt_r1);
 	pkt_cnt_low = FIELD_GET(MTK_PPE_MIB_SER_R1_PKT_CNT_LOW, cnt_r1);
 	pkt_cnt_high = FIELD_GET(MTK_PPE_MIB_SER_R2_PKT_CNT_HIGH, cnt_r2);
-	*bytes = ((u64)byte_cnt_high << 32) | byte_cnt_low;
-	*packets = (pkt_cnt_high << 16) | pkt_cnt_low;
-	return 0;
+	acct->bytes += ((u64)byte_cnt_high << 32) | byte_cnt_low;
+	acct->packets += (pkt_cnt_high << 16) | pkt_cnt_low;
+
+	return acct;
 }
 
 static void mtk_ppe_cache_clear(struct mtk_ppe *ppe)
@@ -503,14 +518,6 @@ __mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
 		hwe->ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_INVALID);
 		dma_wmb();
 		mtk_ppe_cache_clear(ppe);
-
-		if (ppe->accounting) {
-			struct mtk_foe_accounting *acct;
-
-			acct = ppe->acct_table + entry->hash * sizeof(*acct);
-			acct->packets = 0;
-			acct->bytes = 0;
-		}
 	}
 
 	entry->hash = 0xffff;
@@ -535,11 +542,14 @@ static int __mtk_foe_entry_idle_time(struct mtk_ppe *ppe, u32 ib1)
 }
 
 static bool
-mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
+mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
+		      u64 *packets, u64 *bytes)
 {
+	struct mtk_foe_accounting *acct;
 	struct mtk_foe_entry foe = {};
 	struct mtk_foe_entry *hwe;
 	u16 hash = entry->hash;
+	bool ret = false;
 	int len;
 
 	if (hash == 0xffff)
@@ -550,18 +560,35 @@ mtk_flow_entry_update(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 	memcpy(&foe, hwe, len);
 
 	if (!mtk_flow_entry_match(ppe->eth, entry, &foe, len) ||
-	    FIELD_GET(MTK_FOE_IB1_STATE, foe.ib1) != MTK_FOE_STATE_BIND)
-		return false;
+	    FIELD_GET(MTK_FOE_IB1_STATE, foe.ib1) != MTK_FOE_STATE_BIND) {
+		acct = mtk_ppe_acct_data(ppe, hash);
+		if (acct) {
+			entry->prev_packets += acct->packets;
+			entry->prev_bytes += acct->bytes;
+		}
+
+		goto out;
+	}
 
 	entry->data.ib1 = foe.ib1;
+	acct = mtk_ppe_mib_entry_read(ppe, hash);
+	ret = true;
 
-	return true;
+out:
+	if (acct) {
+		*packets += acct->packets;
+		*bytes += acct->bytes;
+	}
+
+	return ret;
 }
 
 static void
 mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 {
 	u32 ib1_ts_mask = mtk_get_ib1_ts_mask(ppe->eth);
+	u64 *packets = &entry->packets;
+	u64 *bytes = &entry->bytes;
 	struct mtk_flow_entry *cur;
 	struct hlist_node *tmp;
 	int idle;
@@ -570,7 +597,9 @@ mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 	hlist_for_each_entry_safe(cur, tmp, &entry->l2_flows, l2_list) {
 		int cur_idle;
 
-		if (!mtk_flow_entry_update(ppe, cur)) {
+		if (!mtk_flow_entry_update(ppe, cur, packets, bytes)) {
+			entry->prev_packets += cur->prev_packets;
+			entry->prev_bytes += cur->prev_bytes;
 			__mtk_foe_entry_clear(ppe, entry, false);
 			continue;
 		}
@@ -585,10 +614,29 @@ mtk_flow_entry_update_l2(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
 	}
 }
 
+void mtk_foe_entry_get_stats(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
+			     int *idle)
+{
+	entry->packets = entry->prev_packets;
+	entry->bytes = entry->prev_bytes;
+
+	spin_lock_bh(&ppe_lock);
+
+	if (entry->type == MTK_FLOW_TYPE_L2)
+		mtk_flow_entry_update_l2(ppe, entry);
+	else
+		mtk_flow_entry_update(ppe, entry, &entry->packets, &entry->bytes);
+
+	*idle = __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
+
+	spin_unlock_bh(&ppe_lock);
+}
+
 static void
 __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 		       u16 hash)
 {
+	struct mtk_foe_accounting *acct;
 	struct mtk_eth *eth = ppe->eth;
 	u16 timestamp = mtk_eth_timestamp(eth);
 	struct mtk_foe_entry *hwe;
@@ -613,6 +661,12 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 
 	dma_wmb();
 
+	acct = mtk_ppe_mib_entry_read(ppe, hash);
+	if (acct) {
+		acct->packets = 0;
+		acct->bytes = 0;
+	}
+
 	mtk_ppe_cache_clear(ppe);
 }
 
@@ -777,21 +831,6 @@ void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
 	spin_unlock_bh(&ppe_lock);
 }
 
-int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry)
-{
-	int idle;
-
-	spin_lock_bh(&ppe_lock);
-	if (entry->type == MTK_FLOW_TYPE_L2)
-		mtk_flow_entry_update_l2(ppe, entry);
-	else
-		mtk_flow_entry_update(ppe, entry);
-	idle = __mtk_foe_entry_idle_time(ppe, entry->data.ib1);
-	spin_unlock_bh(&ppe_lock);
-
-	return idle;
-}
-
 int mtk_ppe_prepare_reset(struct mtk_ppe *ppe)
 {
 	if (!ppe)
@@ -819,32 +858,6 @@ int mtk_ppe_prepare_reset(struct mtk_ppe *ppe)
 	return mtk_ppe_wait_busy(ppe);
 }
 
-struct mtk_foe_accounting *mtk_foe_entry_get_mib(struct mtk_ppe *ppe, u32 index,
-						 struct mtk_foe_accounting *diff)
-{
-	struct mtk_foe_accounting *acct;
-	int size = sizeof(struct mtk_foe_accounting);
-	u64 bytes, packets;
-
-	if (!ppe->accounting)
-		return NULL;
-
-	if (mtk_mib_entry_read(ppe, index, &bytes, &packets))
-		return NULL;
-
-	acct = ppe->acct_table + index * size;
-
-	acct->bytes += bytes;
-	acct->packets += packets;
-
-	if (diff) {
-		diff->bytes = bytes;
-		diff->packets = packets;
-	}
-
-	return acct;
-}
-
 struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, int index)
 {
 	bool accounting = eth->soc->has_accounting;
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index 6823256016a2..13dd7988e95c 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -283,6 +283,8 @@ struct mtk_flow_entry {
 	struct mtk_foe_entry data;
 	struct rhash_head node;
 	unsigned long cookie;
+	u64 prev_packets, prev_bytes;
+	u64 packets, bytes;
 };
 
 struct mtk_mib_entry {
@@ -327,6 +329,7 @@ void mtk_ppe_deinit(struct mtk_eth *eth);
 void mtk_ppe_start(struct mtk_ppe *ppe);
 int mtk_ppe_stop(struct mtk_ppe *ppe);
 int mtk_ppe_prepare_reset(struct mtk_ppe *ppe);
+struct mtk_foe_accounting *mtk_ppe_mib_entry_read(struct mtk_ppe *ppe, u16 index);
 
 void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash);
 
@@ -375,9 +378,8 @@ int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry,
 			    unsigned int queue);
 int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
 void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
-int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
 int mtk_ppe_debugfs_init(struct mtk_ppe *ppe, int index);
-struct mtk_foe_accounting *mtk_foe_entry_get_mib(struct mtk_ppe *ppe, u32 index,
-						 struct mtk_foe_accounting *diff);
+void mtk_foe_entry_get_stats(struct mtk_ppe *ppe, struct mtk_flow_entry *entry,
+			     int *idle);
 
 #endif
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
index 53cf87e9acbb..c13acdbb874c 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
@@ -96,7 +96,7 @@ mtk_ppe_debugfs_foe_show(struct seq_file *m, void *private, bool bind)
 		if (bind && state != MTK_FOE_STATE_BIND)
 			continue;
 
-		acct = mtk_foe_entry_get_mib(ppe, i, NULL);
+		acct = mtk_ppe_mib_entry_read(ppe, i);
 
 		type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1);
 		seq_printf(m, "%05x %s %7s", i,
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
index c9cb317b7a2d..f1e8cdac5792 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
@@ -499,24 +499,21 @@ static int
 mtk_flow_offload_stats(struct mtk_eth *eth, struct flow_cls_offload *f)
 {
 	struct mtk_flow_entry *entry;
-	struct mtk_foe_accounting diff;
-	u32 idle;
+	u64 packets, bytes;
+	int idle;
 
 	entry = rhashtable_lookup(&eth->flow_table, &f->cookie,
 				  mtk_flow_ht_params);
 	if (!entry)
 		return -ENOENT;
 
-	idle = mtk_foe_entry_idle_time(eth->ppe[entry->ppe_index], entry);
+	packets = entry->packets;
+	bytes = entry->bytes;
+	mtk_foe_entry_get_stats(eth->ppe[entry->ppe_index], entry, &idle);
+	f->stats.pkts += entry->packets - packets;
+	f->stats.bytes += entry->bytes - bytes;
 	f->stats.lastused = jiffies - idle * HZ;
 
-	if (entry->hash != 0xFFFF &&
-	    mtk_foe_entry_get_mib(eth->ppe[entry->ppe_index], entry->hash,
-				  &diff)) {
-		f->stats.pkts += diff.packets;
-		f->stats.bytes += diff.bytes;
-	}
-
 	return 0;
 }
 

From patchwork Fri Mar 31 08:29:45 2023
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13195435
X-Patchwork-Delegate: kuba@kernel.org
From: Felix Fietkau
To: netdev@vger.kernel.org
Cc: Daniel Golle
Subject: [PATCH net-next 3/3] net: ethernet: mtk_eth_soc: fix ppe flow accounting for v1 hardware
Date: Fri, 31 Mar 2023 10:29:45 +0200
Message-Id: <20230331082945.75075-3-nbd@nbd.name>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230331082945.75075-1-nbd@nbd.name>
References: <20230331082945.75075-1-nbd@nbd.name>
X-Mailing-List: netdev@vger.kernel.org

Older chips (like MT7622) use a different bit in ib2 to enable hardware
counter support.

Fixes: 3fbe4d8c0e53 ("net: ethernet: mtk_eth_soc: ppe: add support for flow accounting")
Signed-off-by: Felix Fietkau
Reviewed-by: Leon Romanovsky
---
 drivers/net/ethernet/mediatek/mtk_ppe.c | 10 ++++++++--
 drivers/net/ethernet/mediatek/mtk_ppe.h |  3 ++-
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 64e8dc8d814b..5cfa45ba66dd 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -640,6 +640,7 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 	struct mtk_eth *eth = ppe->eth;
 	u16 timestamp = mtk_eth_timestamp(eth);
 	struct mtk_foe_entry *hwe;
+	u32 val;
 
 	if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
 		entry->ib1 &= ~MTK_FOE_IB1_BIND_TIMESTAMP_V2;
@@ -656,8 +657,13 @@ __mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_foe_entry *entry,
 	wmb();
 	hwe->ib1 = entry->ib1;
 
-	if (ppe->accounting)
-		*mtk_foe_entry_ib2(eth, hwe) |= MTK_FOE_IB2_MIB_CNT;
+	if (ppe->accounting) {
+		if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+			val = MTK_FOE_IB2_MIB_CNT_V2;
+		else
+			val = MTK_FOE_IB2_MIB_CNT;
+		*mtk_foe_entry_ib2(eth, hwe) |= val;
+	}
 
 	dma_wmb();
 
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index 13dd7988e95c..321aea4bde85 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -55,9 +55,10 @@ enum {
 #define MTK_FOE_IB2_PSE_QOS		BIT(4)
 #define MTK_FOE_IB2_DEST_PORT		GENMASK(7, 5)
 #define MTK_FOE_IB2_MULTICAST		BIT(8)
+#define MTK_FOE_IB2_MIB_CNT		BIT(10)
 #define MTK_FOE_IB2_WDMA_QID2		GENMASK(13, 12)
-#define MTK_FOE_IB2_MIB_CNT		BIT(15)
+#define MTK_FOE_IB2_MIB_CNT_V2		BIT(15)
 #define MTK_FOE_IB2_WDMA_DEVIDX		BIT(16)
 #define MTK_FOE_IB2_WDMA_WINFO		BIT(17)
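[Editor's illustration] To summarize the accounting model that patches 2 and 3
describe, here is a small stand-alone C sketch. It is not driver code and all
names are made up: hardware MIB counters are zeroed when an entry is
committed, live subflows contribute their current counters, expired subflows
contribute the value saved at teardown, and the stats callback reports only
the delta since the previous query.

/*
 * Stand-alone sketch (not driver code) of the L2 flow accounting model:
 * sum live hardware counters with the values saved for expired subflows,
 * then report the delta since the last query.
 */
#include <stdbool.h>
#include <stdio.h>

struct subflow {
	bool live;
	unsigned long long hw_packets;    /* current hardware counter */
	unsigned long long prev_packets;  /* saved when the subflow expired */
};

static unsigned long long l2_flow_packets(const struct subflow *sub, int n)
{
	unsigned long long total = 0;

	for (int i = 0; i < n; i++)
		total += sub[i].live ? sub[i].hw_packets : sub[i].prev_packets;
	return total;
}

int main(void)
{
	struct subflow subs[] = {
		{ .live = true,  .hw_packets = 100 },
		{ .live = false, .prev_packets = 40 },  /* already expired */
	};
	unsigned long long last = 0, now;

	now = l2_flow_packets(subs, 2);
	printf("delta reported to the stats callback: %llu\n", now - last);
	return 0;
}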