From patchwork Sat Jun 26 00:33:05 2021
X-Patchwork-Submitter: Vinicius Costa Gomes
X-Patchwork-Id: 12346053
X-Patchwork-Delegate: kuba@kernel.org
From: Vinicius Costa Gomes
To: netdev@vger.kernel.org
Cc: Vinicius Costa Gomes, jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, vladimir.oltean@nxp.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz
Subject: [PATCH net-next v4 03/12] core: Introduce netdev_tc_map_to_queue_mask()
Date: Fri, 25 Jun 2021 17:33:05 -0700
Message-Id: <20210626003314.3159402-4-vinicius.gomes@intel.com>
In-Reply-To: <20210626003314.3159402-1-vinicius.gomes@intel.com>
References: <20210626003314.3159402-1-vinicius.gomes@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Convert from a bitmask specifying traffic classes (bit 0 for traffic
class (TC) 0, bit 1 for TC 1, and so on) to a bitmask of queues. The
conversion is done using the netdev tc_to_txq map.

The first users of netdev_tc_map_to_queue_mask() will be the mqprio
and taprio qdiscs.
Signed-off-by: Vinicius Costa Gomes
---
 include/linux/netdevice.h |  1 +
 net/core/dev.c            | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index af5d4c5b0ad5..dcff0b9a55ab 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2279,6 +2279,7 @@ int netdev_txq_to_tc(struct net_device *dev, unsigned int txq);
 void netdev_reset_tc(struct net_device *dev);
 int netdev_set_tc_queue(struct net_device *dev, u8 tc, u16 count, u16 offset);
 int netdev_set_num_tc(struct net_device *dev, u8 num_tc);
+u32 netdev_tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask);
 
 static inline
 int netdev_get_num_tc(struct net_device *dev)
diff --git a/net/core/dev.c b/net/core/dev.c
index 991d09b67bd9..4b25dbd26243 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2956,6 +2956,26 @@ int netdev_set_num_tc(struct net_device *dev, u8 num_tc)
 }
 EXPORT_SYMBOL(netdev_set_num_tc);
 
+u32 netdev_tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
+{
+	u32 i, queue_mask = 0;
+
+	for (i = 0; i < dev->num_tc; i++) {
+		u32 offset, count;
+
+		if (!(tc_mask & BIT(i)))
+			continue;
+
+		offset = dev->tc_to_txq[i].offset;
+		count = dev->tc_to_txq[i].count;
+
+		queue_mask |= GENMASK(offset + count - 1, offset);
+	}
+
+	return queue_mask;
+}
+EXPORT_SYMBOL(netdev_tc_map_to_queue_mask);
+
 void netdev_unbind_sb_channel(struct net_device *dev,
 			      struct net_device *sb_dev)
 {