From patchwork Mon Sep 4 12:31:01 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 1/7] overflow: add DEFINE_FLEX() for on-stack allocs
Date: Mon, 4 Sep 2023 08:31:01 -0400
Message-Id: <20230904123107.116381-2-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Add a DEFINE_FLEX() macro for on-stack allocation of structs with a
flexible array member.

Expose the __struct_size() macro outside of fortify-string.h, as it can be
used to read the size of structs allocated by DEFINE_FLEX(). Move
__member_size() alongside it. -Kees

Using an underlying array for on-stack storage lets us declare
known-at-compile-time structures without kzalloc().
Actual usage for the ice driver is in the following patches of the series.

The missing-__has_builtin() workaround is moved up so that it also covers
assembly compilation with m68k-linux-gcc, see [1]. The error was (note the
.S file extension):

  In file included from ../include/linux/linkage.h:5,
                   from ../arch/m68k/fpsp040/skeleton.S:40:
  ../include/linux/compiler_types.h:331:5: warning: "__has_builtin" is not defined, evaluates to 0 [-Wundef]
    331 | #if __has_builtin(__builtin_dynamic_object_size)
        |     ^~~~~~~~~~~~~
  ../include/linux/compiler_types.h:331:18: error: missing binary operator before token "("
    331 | #if __has_builtin(__builtin_dynamic_object_size)
        |                  ^

[1] https://lore.kernel.org/netdev/202308112122.OuF0YZqL-lkp@intel.com/

Co-developed-by: Kees Cook
Signed-off-by: Kees Cook
Acked-by: Kees Cook
Signed-off-by: Przemek Kitszel
---
Prev versions, [v3] with Kees' ACK:
[v3] https://lore.kernel.org/netdev/202308161337.975C93F163@keescook

v4: David: add _Static_assert to give a better error message for non-const
    counts; add one more level of macros to ease adding a non-zeroing (or
    other) variant in the future;
v3: remove old macro needlessly kept in v2; fix build warning; reword doc
    comment;
v2: Kees: reuse __struct_size() instead of adding a new macro (adding Kees
    as Co-developer here);
v1: change macro name; add macro for size read; accept struct type instead
    of ptr to it; change alignment;
---
 include/linux/compiler_types.h | 32 ++++++++++++++++++++-----------
 include/linux/fortify-string.h |  4 ----
 include/linux/overflow.h       | 35 ++++++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index c523c6683789..6f1ca49306d2 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -2,6 +2,15 @@
 #ifndef __LINUX_COMPILER_TYPES_H
 #define __LINUX_COMPILER_TYPES_H
 
+/*
+ * __has_builtin is supported on gcc >= 10, clang >= 3 and icc >= 21.
+ * In the meantime, to support gcc < 10, we implement __has_builtin
+ * by hand.
+ */
+#ifndef __has_builtin
+#define __has_builtin(x) (0)
+#endif
+
 #ifndef __ASSEMBLY__
 
 /*
@@ -134,17 +143,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 # define __preserve_most
 #endif
 
-/* Builtins */
-
-/*
- * __has_builtin is supported on gcc >= 10, clang >= 3 and icc >= 21.
- * In the meantime, to support gcc < 10, we implement __has_builtin
- * by hand.
- */
-#ifndef __has_builtin
-#define __has_builtin(x) (0)
-#endif
-
 /* Compiler specific macros. */
 #ifdef __clang__
 #include
@@ -352,6 +350,18 @@ struct ftrace_likely_data {
 # define __realloc_size(x, ...)
 #endif
 
+/*
+ * When the size of an allocated object is needed, use the best available
+ * mechanism to find it. (For cases where sizeof() cannot be used.)
+ */
+#if __has_builtin(__builtin_dynamic_object_size)
+#define __struct_size(p)	__builtin_dynamic_object_size(p, 0)
+#define __member_size(p)	__builtin_dynamic_object_size(p, 1)
+#else
+#define __struct_size(p)	__builtin_object_size(p, 0)
+#define __member_size(p)	__builtin_object_size(p, 1)
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif

diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index da51a83b2829..1e7711185ec6 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -93,13 +93,9 @@ extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size)
 #if __has_builtin(__builtin_dynamic_object_size)
 #define POS			__pass_dynamic_object_size(1)
 #define POS0			__pass_dynamic_object_size(0)
-#define __struct_size(p)	__builtin_dynamic_object_size(p, 0)
-#define __member_size(p)	__builtin_dynamic_object_size(p, 1)
 #else
 #define POS			__pass_object_size(1)
 #define POS0			__pass_object_size(0)
-#define __struct_size(p)	__builtin_object_size(p, 0)
-#define __member_size(p)	__builtin_object_size(p, 1)
 #endif
 
 #define __compiletime_lessthan(bounds, length)	(	\
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index f9b60313eaea..7b5cf4a5cd19 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -309,4 +309,39 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
 #define struct_size_t(type, member, count)				\
 	struct_size((type *)NULL, member, count)
 
+/**
+ * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
+ * Enables caller macro to pass (different) initializer.
+ *
+ * @type: structure type name, including "struct" keyword.
+ * @name: Name for a variable to define.
+ * @member: Name of the array member.
+ * @count: Number of elements in the array; must be compile-time const.
+ * @initializer: initializer expression (could be empty for no init).
+ */
+#define _DEFINE_FLEX(type, name, member, count, initializer)		\
+	_Static_assert(__builtin_constant_p(count),			\
+		       "onstack flex array members require compile-time const count"); \
+	union {								\
+		u8 bytes[struct_size_t(type, member, count)];		\
+		type obj;						\
+	} name##_u initializer;						\
+	type *name = (type *)&name##_u
+
+/**
+ * DEFINE_FLEX() - Define an on-stack instance of structure with a trailing
+ * flexible array member.
+ *
+ * @type: structure type name, including "struct" keyword.
+ * @name: Name for a variable to define.
+ * @member: Name of the array member.
+ * @count: Number of elements in the array; must be compile-time const.
+ *
+ * Define a zeroed, on-stack, instance of @type structure with a trailing
+ * flexible array member.
+ * Use __struct_size(@name) to get compile-time size of it afterwards.
+ */
+#define DEFINE_FLEX(type, name, member, count)				\
+	_DEFINE_FLEX(type, name, member, count, = {})
+
 #endif	/* __LINUX_OVERFLOW_H */

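For readers skimming the archive, a minimal usage sketch of the new macros follows. Everything named foo_* here is hypothetical (it is not part of this patch or of the kernel); only DEFINE_FLEX(), __struct_size() and struct_size_t() are the real additions above:

	/* hypothetical command layout and helpers, for illustration only */
	struct foo_hw;

	struct foo_cmd {
		__le16 num_elems;
		__le32 elem[];	/* trailing flexible array member */
	};

	int foo_send_cmd(struct foo_hw *hw, void *buf, u16 buf_size);

	static int foo_remove_one(struct foo_hw *hw, u32 teid)
	{
		/* zeroed on-stack buffer with room for exactly one elem[] entry */
		DEFINE_FLEX(struct foo_cmd, cmd, elem, 1);
		u16 buf_size = __struct_size(cmd); /* == struct_size_t(struct foo_cmd, elem, 1) */

		cmd->num_elems = cpu_to_le16(1);
		cmd->elem[0] = cpu_to_le32(teid);

		/* no kzalloc()/kfree() pair and no -ENOMEM error path */
		return foo_send_cmd(hw, cmd, buf_size);
	}

Per the v4 changelog, @count must be a compile-time constant; a runtime value is rejected by the _Static_assert with a readable message.
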
From patchwork Mon Sep 4 12:31:02 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 2/7] ice: ice_sched_remove_elems: replace 1 elem array param by u32
Date: Mon, 4 Sep 2023 08:31:02 -0400
Message-Id: <20230904123107.116381-3-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Replace the array+size parameters of ice_sched_remove_elems() with a
single u32, as all callers use it with "1". This enables moving from
heap-based to stack-based allocation, which is also more elegant thanks
to the DEFINE_FLEX() macro.
Signed-off-by: Przemek Kitszel
---
add/remove: 2/2 grow/shrink: 0/2 up/down: 252/-388 (-136)
---
 drivers/net/ethernet/intel/ice/ice_sched.c | 26 ++++++++--------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index c0533d7b66b9..efa5cb202eac 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -229,37 +229,29 @@ ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
  * ice_sched_remove_elems - remove nodes from HW
  * @hw: pointer to the HW struct
  * @parent: pointer to the parent node
- * @num_nodes: number of nodes
- * @node_teids: array of node teids to be deleted
+ * @node_teid: node teid to be deleted
  *
  * This function remove nodes from HW
  */
 static int
 ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
-		       u16 num_nodes, u32 *node_teids)
+		       u32 node_teid)
 {
-	struct ice_aqc_delete_elem *buf;
-	u16 i, num_groups_removed = 0;
-	u16 buf_size;
+	DEFINE_FLEX(struct ice_aqc_delete_elem, buf, teid, 1);
+	u16 buf_size = __struct_size(buf);
+	u16 num_groups_removed = 0;
 	int status;
 
-	buf_size = struct_size(buf, teid, num_nodes);
-	buf = devm_kzalloc(ice_hw_to_dev(hw), buf_size, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
-
 	buf->hdr.parent_teid = parent->info.node_teid;
-	buf->hdr.num_elems = cpu_to_le16(num_nodes);
-	for (i = 0; i < num_nodes; i++)
-		buf->teid[i] = cpu_to_le32(node_teids[i]);
+	buf->hdr.num_elems = cpu_to_le16(1);
+	buf->teid[0] = cpu_to_le32(node_teid);
 
 	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
 					   &num_groups_removed, NULL);
 	if (status || num_groups_removed != 1)
 		ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
 			  hw->adminq.sq_last_status);
 
-	devm_kfree(ice_hw_to_dev(hw), buf);
 	return status;
 }
 
@@ -326,7 +318,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
 		u32 teid = le32_to_cpu(node->info.node_teid);
 
-		ice_sched_remove_elems(hw, node->parent, 1, &teid);
+		ice_sched_remove_elems(hw, node->parent, teid);
 	}
 	parent = node->parent;
 	/* root has no parent */
@@ -1193,7 +1185,7 @@ static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
 		int status;
 
 		/* remove the default leaf node */
-		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		status = ice_sched_remove_elems(pi->hw, node->parent, teid);
 		if (!status)
 			ice_free_sched_node(pi, node);
 	}

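A rough note on what the move to the stack costs, with a made-up layout purely to make the arithmetic concrete (this is not the real struct ice_aqc_delete_elem):

	/* hypothetical layout, for size arithmetic only */
	struct bar_hdr {
		__le32 parent_teid;
		__le16 num_elems;
		__le16 rsvd;
	};

	struct bar_cmd {
		struct bar_hdr hdr;
		__le32 teid[];	/* flexible array member */
	};

	/*
	 * DEFINE_FLEX(struct bar_cmd, buf, teid, 1) reserves roughly
	 * struct_size_t(struct bar_cmd, teid, 1)
	 *	== sizeof(struct bar_cmd) + 1 * sizeof(__le32)
	 *	== 8 + 4 == 12 bytes
	 * of kernel stack, and __struct_size(buf) reports that same size.
	 */

So for the single-element admin queue buffers converted in this series the stack footprint stays in the tens of bytes; the pattern would not suit large or runtime-sized arrays.
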
From patchwork Mon Sep 4 12:31:03 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 3/7] ice: drop two params of ice_aq_move_sched_elems()
Date: Mon, 4 Sep 2023 08:31:03 -0400
Message-Id: <20230904123107.116381-4-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Remove two arguments of ice_aq_move_sched_elems(). The last of them was
always NULL, and @grps_req was always 1. Assuming @grps_req to be one
allows us to use the DEFINE_FLEX() macro, which removes some need for
heap allocations.
Signed-off-by: Przemek Kitszel
---
add/remove: 0/0 grow/shrink: 1/6 up/down: 46/-261 (-215)
---
 drivers/net/ethernet/intel/ice/ice_lag.c   | 48 ++++++----------------
 drivers/net/ethernet/intel/ice/ice_sched.c | 30 ++++----------
 drivers/net/ethernet/intel/ice/ice_sched.h |  6 +--
 3 files changed, 23 insertions(+), 61 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
index 4f39863b5537..2c96d1883e19 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -430,10 +430,11 @@ static void
 ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport,
 			u16 vsi_num, u8 tc)
 {
-	u16 numq, valq, buf_size, num_moved, qbuf_size;
+	DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
 	struct device *dev = ice_pf_to_dev(lag->pf);
+	u16 numq, valq, num_moved, qbuf_size;
+	u16 buf_size = __struct_size(buf);
 	struct ice_aqc_cfg_txqs_buf *qbuf;
-	struct ice_aqc_move_elem *buf;
 	struct ice_sched_node *n_prt;
 	struct ice_hw *new_hw = NULL;
 	__le32 teid, parent_teid;
@@ -505,26 +506,17 @@ ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport,
 		goto resume_traffic;
 
 	/* Move Vf's VSI node for this TC to newport's scheduler tree */
-	buf_size = struct_size(buf, teid, 1);
-	buf = kzalloc(buf_size, GFP_KERNEL);
-	if (!buf) {
-		dev_warn(dev, "Failure to alloc memory for VF node failover\n");
-		goto resume_traffic;
-	}
-
 	buf->hdr.src_parent_teid = parent_teid;
 	buf->hdr.dest_parent_teid = n_prt->info.node_teid;
 	buf->hdr.num_elems = cpu_to_le16(1);
 	buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
 	buf->teid[0] = teid;
 
-	if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
-				    NULL))
+	if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
 		dev_warn(dev, "Failure to move VF nodes for failover\n");
 	else
 		ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
 
-	kfree(buf);
 	goto resume_traffic;
 
 qbuf_err:
@@ -755,10 +747,11 @@ static void
 ice_lag_reclaim_vf_tc(struct ice_lag *lag, struct ice_hw *src_hw, u16 vsi_num,
 		      u8 tc)
 {
-	u16 numq, valq, buf_size, num_moved, qbuf_size;
+	DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
 	struct device *dev = ice_pf_to_dev(lag->pf);
+	u16 numq, valq, num_moved, qbuf_size;
+	u16 buf_size = __struct_size(buf);
 	struct ice_aqc_cfg_txqs_buf *qbuf;
-	struct ice_aqc_move_elem *buf;
 	struct ice_sched_node *n_prt;
 	__le32 teid, parent_teid;
 	struct ice_vsi_ctx *ctx;
@@ -820,26 +813,17 @@ ice_lag_reclaim_vf_tc(struct ice_lag *lag, struct ice_hw *src_hw, u16 vsi_num,
 		goto resume_reclaim;
 
 	/* Move node to new parent */
-	buf_size = struct_size(buf, teid, 1);
-	buf = kzalloc(buf_size, GFP_KERNEL);
-	if (!buf) {
-		dev_warn(dev, "Failure to alloc memory for VF node failover\n");
-		goto resume_reclaim;
-	}
-
 	buf->hdr.src_parent_teid = parent_teid;
 	buf->hdr.dest_parent_teid = n_prt->info.node_teid;
 	buf->hdr.num_elems = cpu_to_le16(1);
 	buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
 	buf->teid[0] = teid;
 
-	if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
-				    NULL))
+	if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
 		dev_warn(dev, "Failure to move VF nodes for LAG reclaim\n");
 	else
 		ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
 
-	kfree(buf);
 	goto resume_reclaim;
 
 reclaim_qerr:
@@ -1792,10 +1776,11 @@ static void
 ice_lag_move_vf_nodes_tc_sync(struct ice_lag *lag, struct ice_hw *dest_hw,
 			      u16 vsi_num, u8 tc)
 {
-	u16 numq, valq, buf_size, num_moved, qbuf_size;
+	DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
 	struct device *dev = ice_pf_to_dev(lag->pf);
+	u16 numq, valq, num_moved, qbuf_size;
+	u16 buf_size = __struct_size(buf);
 	struct ice_aqc_cfg_txqs_buf *qbuf;
-	struct ice_aqc_move_elem *buf;
 	struct ice_sched_node *n_prt;
 	__le32 teid, parent_teid;
 	struct ice_vsi_ctx *ctx;
@@ -1853,26 +1838,17 @@ ice_lag_move_vf_nodes_tc_sync(struct ice_lag *lag, struct ice_hw *dest_hw,
 		goto resume_sync;
 
 	/* Move node to new parent */
-	buf_size = struct_size(buf, teid, 1);
-	buf = kzalloc(buf_size, GFP_KERNEL);
-	if (!buf) {
-		dev_warn(dev, "Failure to alloc for VF node move in reset rebuild\n");
-		goto resume_sync;
-	}
-
 	buf->hdr.src_parent_teid = parent_teid;
 	buf->hdr.dest_parent_teid = n_prt->info.node_teid;
 	buf->hdr.num_elems = cpu_to_le16(1);
 	buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
 	buf->teid[0] = teid;
 
-	if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
-				    NULL))
+	if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
 		dev_warn(dev, "Failure to move VF nodes for LAG reset rebuild\n");
 	else
 		ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
 
-	kfree(buf);
 	goto resume_sync;
 
 sync_qerr:
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index efa5cb202eac..2f4a621254e8 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -429,24 +429,20 @@ ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
 }
 
 /**
- * ice_aq_move_sched_elems - move scheduler elements
+ * ice_aq_move_sched_elems - move scheduler element (just 1 group)
  * @hw: pointer to the HW struct
- * @grps_req: number of groups to move
  * @buf: pointer to buffer
  * @buf_size: buffer size in bytes
  * @grps_movd: returns total number of groups moved
- * @cd: pointer to command details structure or NULL
  *
  * Move scheduling elements (0x0408)
  */
 int
-ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
-			struct ice_aqc_move_elem *buf, u16 buf_size,
-			u16 *grps_movd, struct ice_sq_cd *cd)
+ice_aq_move_sched_elems(struct ice_hw *hw, struct ice_aqc_move_elem *buf,
+			u16 buf_size, u16 *grps_movd)
 {
 	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
-					   grps_req, (void *)buf, buf_size,
-					   grps_movd, cd);
+					   1, buf, buf_size, grps_movd, NULL);
 }
 
 /**
@@ -2224,12 +2220,12 @@ int
 ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
 		     u16 num_items, u32 *list)
 {
-	struct ice_aqc_move_elem *buf;
+	DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
+	u16 buf_len = __struct_size(buf);
 	struct ice_sched_node *node;
 	u16 i, grps_movd = 0;
 	struct ice_hw *hw;
 	int status = 0;
-	u16 buf_len;
 
 	hw = pi->hw;
 
@@ -2241,35 +2237,27 @@ ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
 	    hw->max_children[parent->tx_sched_layer])
 		return -ENOSPC;
 
-	buf_len = struct_size(buf, teid, 1);
-	buf = kzalloc(buf_len, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
-
 	for (i = 0; i < num_items; i++) {
 		node = ice_sched_find_node_by_teid(pi->root, list[i]);
 		if (!node) {
 			status = -EINVAL;
-			goto move_err_exit;
+			break;
 		}
 
 		buf->hdr.src_parent_teid = node->info.parent_teid;
 		buf->hdr.dest_parent_teid = parent->info.node_teid;
 		buf->teid[0] = node->info.node_teid;
 		buf->hdr.num_elems = cpu_to_le16(1);
 
-		status = ice_aq_move_sched_elems(hw, 1, buf, buf_len,
-						 &grps_movd, NULL);
+		status = ice_aq_move_sched_elems(hw, buf, buf_len, &grps_movd);
 		if (status && grps_movd != 1) {
 			status = -EIO;
-			goto move_err_exit;
+			break;
 		}
 
 		/* update the SW DB */
 		ice_sched_update_parent(parent, node);
 	}
 
-move_err_exit:
-	kfree(buf);
 	return status;
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
index 0055d9330c07..1aef05ea5a57 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.h
+++ b/drivers/net/ethernet/intel/ice/ice_sched.h
@@ -161,10 +161,8 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
 			     u16 *num_nodes_added);
 void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
 void ice_sched_replay_agg(struct ice_hw *hw);
-int
-ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
-			struct ice_aqc_move_elem *buf, u16 buf_size,
-			u16 *grps_movd, struct ice_sq_cd *cd);
+int ice_aq_move_sched_elems(struct ice_hw *hw, struct ice_aqc_move_elem *buf,
+			    u16 buf_size, u16 *grps_movd);
 int ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
 #endif /* _ICE_SCHED_H_ */

From patchwork Mon Sep 4 12:31:04 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 4/7] ice: make use of DEFINE_FLEX() in ice_ddp.c
Date: Mon, 4 Sep 2023 08:31:04 -0400
Message-Id: <20230904123107.116381-5-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Use the DEFINE_FLEX() macro for the constant-num-of-elems (4) flex array
members of ice_ddp.c.

Signed-off-by: Przemek Kitszel
---
add/remove: 4/0 grow/shrink: 0/1 up/down: 1195/-878 (317)
---
 drivers/net/ethernet/intel/ice/ice_ddp.c | 39 +++++++-----------------
 1 file changed, 11 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
index b27ec93638b6..78ed909745fe 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
@@ -1560,21 +1560,14 @@ static enum ice_ddp_state ice_init_pkg_info(struct ice_hw *hw,
  */
 static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
 {
-	enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
-	struct ice_aqc_get_pkg_info_resp *pkg_info;
-	u16 size;
+	DEFINE_FLEX(struct ice_aqc_get_pkg_info_resp, pkg_info, pkg_info,
+		    ICE_PKG_CNT);
+	u16 size = __struct_size(pkg_info);
 	u32 i;
 
-	size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT);
-	pkg_info = kzalloc(size, GFP_KERNEL);
-	if (!pkg_info)
+	if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL))
 		return ICE_DDP_PKG_ERR;
 
-	if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) {
-		state = ICE_DDP_PKG_ERR;
-		goto init_pkg_free_alloc;
-	}
-
 	for (i = 0; i < le32_to_cpu(pkg_info->count); i++) {
 #define ICE_PKG_FLAG_COUNT 4
 		char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 };
@@ -1604,10 +1597,7 @@ static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
 			 pkg_info->pkg_info[i].name, flags);
 	}
 
-init_pkg_free_alloc:
-	kfree(pkg_info);
-
-	return state;
+	return ICE_DDP_PKG_SUCCESS;
 }
 
 /**
@@ -1622,9 +1612,10 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
 					     struct ice_pkg_hdr *ospkg,
 					     struct ice_seg **seg)
 {
-	struct ice_aqc_get_pkg_info_resp *pkg;
+	DEFINE_FLEX(struct ice_aqc_get_pkg_info_resp, pkg, pkg_info,
+		    ICE_PKG_CNT);
+	u16 size = __struct_size(pkg);
 	enum ice_ddp_state state;
-	u16 size;
 	u32 i;
 
 	/* Check package version compatibility */
@@ -1643,15 +1634,8 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
 	}
 
 	/* Check if FW is compatible with the OS package */
-	size = struct_size(pkg, pkg_info, ICE_PKG_CNT);
-	pkg = kzalloc(size, GFP_KERNEL);
-	if (!pkg)
-		return ICE_DDP_PKG_ERR;
-
-	if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) {
-		state = ICE_DDP_PKG_LOAD_ERROR;
-		goto fw_ddp_compat_free_alloc;
-	}
+	if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL))
+		return ICE_DDP_PKG_LOAD_ERROR;
 
 	for (i = 0; i < le32_to_cpu(pkg->count); i++) {
 		/* loop till we find the NVM package */
@@ -1668,8 +1652,7 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
 		/* done processing NVM package so break */
 		break;
 	}
-fw_ddp_compat_free_alloc:
-	kfree(pkg);
+
 	return state;
 }

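A hypothetical sketch of the count-greater-than-one case used above; the my_* names and MY_MAX_ENTRIES are stand-ins for the ICE_PKG_CNT-sized response handling, not real ice code:

	#define MY_MAX_ENTRIES	4	/* stand-in for a constant like ICE_PKG_CNT */

	struct my_hw;

	struct my_entry {
		__le32 id;
	};

	struct my_list_resp {
		__le32 count;
		struct my_entry entries[];	/* flexible array member */
	};

	int my_aq_get_list(struct my_hw *hw, struct my_list_resp *resp, u16 size);

	static int my_get_list(struct my_hw *hw)
	{
		DEFINE_FLEX(struct my_list_resp, resp, entries, MY_MAX_ENTRIES);
		u16 size = __struct_size(resp);	/* room for MY_MAX_ENTRIES entries */
		u32 i;

		if (my_aq_get_list(hw, resp, size))
			return -EIO;

		/* never walk past the room reserved on the stack */
		for (i = 0; i < min_t(u32, le32_to_cpu(resp->count), MY_MAX_ENTRIES); i++)
			pr_debug("entry %u: 0x%x\n", i, le32_to_cpu(resp->entries[i].id));

		return 0;
	}
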
From patchwork Mon Sep 4 12:31:05 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 5/7] ice: make use of DEFINE_FLEX() for struct ice_aqc_add_tx_qgrp
Date: Mon, 4 Sep 2023 08:31:05 -0400
Message-Id: <20230904123107.116381-6-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Use the DEFINE_FLEX() macro for the 1-elem flex array use case of
struct ice_aqc_add_tx_qgrp.
Signed-off-by: Przemek Kitszel
---
add/remove: 0/2 grow/shrink: 2/2 up/down: 220/-255 (-35)
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 23 +++++------------------
 drivers/net/ethernet/intel/ice/ice_xsk.c | 22 ++++++++--------------
 2 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 201570cd2e0b..b47d1055802e 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1832,21 +1832,14 @@ int ice_vsi_cfg_single_rxq(struct ice_vsi *vsi, u16 q_idx)
 int ice_vsi_cfg_single_txq(struct ice_vsi *vsi, struct ice_tx_ring **tx_rings,
 			   u16 q_idx)
 {
-	struct ice_aqc_add_tx_qgrp *qg_buf;
-	int err;
+	DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
 
 	if (q_idx >= vsi->alloc_txq || !tx_rings || !tx_rings[q_idx])
 		return -EINVAL;
 
-	qg_buf = kzalloc(struct_size(qg_buf, txqs, 1), GFP_KERNEL);
-	if (!qg_buf)
-		return -ENOMEM;
-
 	qg_buf->num_txqs = 1;
 
-	err = ice_vsi_cfg_txq(vsi, tx_rings[q_idx], qg_buf);
-	kfree(qg_buf);
-	return err;
+	return ice_vsi_cfg_txq(vsi, tx_rings[q_idx], qg_buf);
 }
 
 /**
@@ -1888,24 +1881,18 @@ int ice_vsi_cfg_rxqs(struct ice_vsi *vsi)
 static int
 ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_tx_ring **rings, u16 count)
 {
-	struct ice_aqc_add_tx_qgrp *qg_buf;
-	u16 q_idx = 0;
+	DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
 	int err = 0;
-
-	qg_buf = kzalloc(struct_size(qg_buf, txqs, 1), GFP_KERNEL);
-	if (!qg_buf)
-		return -ENOMEM;
+	u16 q_idx;
 
 	qg_buf->num_txqs = 1;
 
 	for (q_idx = 0; q_idx < count; q_idx++) {
 		err = ice_vsi_cfg_txq(vsi, rings[q_idx], qg_buf);
 		if (err)
-			goto err_cfg_txqs;
+			break;
 	}
 
-err_cfg_txqs:
-	kfree(qg_buf);
 	return err;
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 2a3f0834e139..99954508184f 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -217,61 +217,55 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
  */
 static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
-	struct ice_aqc_add_tx_qgrp *qg_buf;
+	DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
+	u16 size = __struct_size(qg_buf);
 	struct ice_q_vector *q_vector;
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
-	u16 size;
 	int err;
 
 	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
 		return -EINVAL;
 
-	size = struct_size(qg_buf, txqs, 1);
-	qg_buf = kzalloc(size, GFP_KERNEL);
-	if (!qg_buf)
-		return -ENOMEM;
-
 	qg_buf->num_txqs = 1;
 
 	tx_ring = vsi->tx_rings[q_idx];
 	rx_ring = vsi->rx_rings[q_idx];
 	q_vector = rx_ring->q_vector;
 
 	err = ice_vsi_cfg_txq(vsi, tx_ring, qg_buf);
 	if (err)
-		goto free_buf;
+		return err;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
 
 		memset(qg_buf, 0, size);
 		qg_buf->num_txqs = 1;
 		err = ice_vsi_cfg_txq(vsi, xdp_ring, qg_buf);
 		if (err)
-			goto free_buf;
+			return err;
 		ice_set_ring_xdp(xdp_ring);
 		ice_tx_xsk_pool(vsi, q_idx);
 	}
 
 	err = ice_vsi_cfg_rxq(rx_ring);
 	if (err)
-		goto free_buf;
+		return err;
 
 	ice_qvec_cfg_msix(vsi, q_vector);
 
 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
 	if (err)
-		goto free_buf;
+		return err;
 
 	clear_bit(ICE_CFG_BUSY, vsi->state);
 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);
 
 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 
-free_buf:
-	kfree(qg_buf);
-	return err;
+
+	return 0;
 }
 
 /**

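One usage detail worth calling out from ice_qp_ena() above: a DEFINE_FLEX() buffer is zeroed only at its definition, so reusing it for a second command needs an explicit memset(). A hypothetical sketch, reusing the foo_cmd/foo_send_cmd() stand-ins from the note after patch 1:

	static int foo_send_two(struct foo_hw *hw, u32 first, u32 second)
	{
		DEFINE_FLEX(struct foo_cmd, cmd, elem, 1);	/* zeroed here */
		u16 buf_size = __struct_size(cmd);
		int err;

		cmd->num_elems = cpu_to_le16(1);
		cmd->elem[0] = cpu_to_le32(first);
		err = foo_send_cmd(hw, cmd, buf_size);
		if (err)
			return err;

		/* clear stale contents by hand before the second use */
		memset(cmd, 0, buf_size);
		cmd->num_elems = cpu_to_le16(1);
		cmd->elem[0] = cpu_to_le32(second);
		return foo_send_cmd(hw, cmd, buf_size);
	}
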
From patchwork Mon Sep 4 12:31:06 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 6/7] ice: make use of DEFINE_FLEX() for struct ice_aqc_dis_txq_item
Date: Mon, 4 Sep 2023 08:31:06 -0400
Message-Id: <20230904123107.116381-7-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Use the DEFINE_FLEX() macro for the 1-elem flex array use case of
struct ice_aqc_dis_txq_item.
Signed-off-by: Przemek Kitszel
---
add/remove: 0/0 grow/shrink: 1/1 up/down: 9/-18 (-9)
---
 drivers/net/ethernet/intel/ice/ice_common.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 80deca45ab59..068079ab5e23 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -4726,11 +4726,11 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 		enum ice_disq_rst_src rst_src, u16 vmvf_num,
 		struct ice_sq_cd *cd)
 {
-	struct ice_aqc_dis_txq_item *qg_list;
+	DEFINE_FLEX(struct ice_aqc_dis_txq_item, qg_list, q_id, 1);
+	u16 i, buf_size = __struct_size(qg_list);
 	struct ice_q_ctx *q_ctx;
 	int status = -ENOENT;
 	struct ice_hw *hw;
-	u16 i, buf_size;
 
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return -EIO;
@@ -4748,11 +4748,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 		return -EIO;
 	}
 
-	buf_size = struct_size(qg_list, q_id, 1);
-	qg_list = kzalloc(buf_size, GFP_KERNEL);
-	if (!qg_list)
-		return -ENOMEM;
-
 	mutex_lock(&pi->sched_lock);
 
 	for (i = 0; i < num_queues; i++) {
@@ -4785,7 +4780,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 		q_ctx->q_teid = ICE_INVAL_TEID;
 	}
 	mutex_unlock(&pi->sched_lock);
-	kfree(qg_list);
 	return status;
 }
 
@@ -4954,22 +4948,17 @@ int
 ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
 		      u16 *q_id)
 {
-	struct ice_aqc_dis_txq_item *qg_list;
+	DEFINE_FLEX(struct ice_aqc_dis_txq_item, qg_list, q_id, 1);
+	u16 qg_size = __struct_size(qg_list);
 	struct ice_hw *hw;
 	int status = 0;
-	u16 qg_size;
 	int i;
 
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return -EIO;
 
 	hw = pi->hw;
 
-	qg_size = struct_size(qg_list, q_id, 1);
-	qg_list = kzalloc(qg_size, GFP_KERNEL);
-	if (!qg_list)
-		return -ENOMEM;
-
 	mutex_lock(&pi->sched_lock);
 
 	for (i = 0; i < count; i++) {
@@ -4994,7 +4983,6 @@ ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
 	}
 
 	mutex_unlock(&pi->sched_lock);
-	kfree(qg_list);
 	return status;
 }

From patchwork Mon Sep 4 12:31:07 2023
From: Przemek Kitszel
To: netdev@vger.kernel.org
Cc: Kees Cook, Jacob Keller, intel-wired-lan@lists.osuosl.org, Alexander Lobakin, linux-hardening@vger.kernel.org, Steven Zou, Jakub Kicinski, Anthony Nguyen, David Laight, Przemek Kitszel
Subject: [RFC net-next v4 7/7] ice: make use of DEFINE_FLEX() in ice_switch.c
Date: Mon, 4 Sep 2023 08:31:07 -0400
Message-Id: <20230904123107.116381-8-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>
References: <20230904123107.116381-1-przemyslaw.kitszel@intel.com>

Use the DEFINE_FLEX() macro for the 1-elem flex array members of
ice_switch.c.

Signed-off-by: Przemek Kitszel
---
add/remove: 2/2 grow/shrink: 3/6 up/down: 489/-470 (19)
---
 drivers/net/ethernet/intel/ice/ice_switch.c | 63 +++++----------------
 1 file changed, 14 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 2f77b684ff76..ee19f3aa3d19 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -1812,15 +1812,11 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
 			   enum ice_sw_lkup_type lkup_type,
 			   enum ice_adminq_opc opc)
 {
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
+	u16 buf_len = __struct_size(sw_buf);
 	struct ice_aqc_res_elem *vsi_ele;
-	u16 buf_len;
 	int status;
 
-	buf_len = struct_size(sw_buf, elem, 1);
-	sw_buf = devm_kzalloc(ice_hw_to_dev(hw), buf_len, GFP_KERNEL);
-	if (!sw_buf)
-		return -ENOMEM;
 	sw_buf->num_elems = cpu_to_le16(1);
 
 	if (lkup_type == ICE_SW_LKUP_MAC ||
@@ -1840,25 +1836,22 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
 		sw_buf->res_type =
 			cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
 	} else {
-		status = -EINVAL;
-		goto ice_aq_alloc_free_vsi_list_exit;
+		return -EINVAL;
 	}
 
 	if (opc == ice_aqc_opc_free_res)
 		sw_buf->elem[0].e.sw_resp = cpu_to_le16(*vsi_list_id);
 
 	status = ice_aq_alloc_free_res(hw, sw_buf, buf_len, opc);
 	if (status)
-		goto ice_aq_alloc_free_vsi_list_exit;
+		return status;
 
 	if (opc == ice_aqc_opc_alloc_res) {
 		vsi_ele = &sw_buf->elem[0];
 		*vsi_list_id = le16_to_cpu(vsi_ele->e.sw_resp);
 	}
 
-ice_aq_alloc_free_vsi_list_exit:
-	devm_kfree(ice_hw_to_dev(hw), sw_buf);
-	return status;
+	return 0;
 }
 
 /**
@@ -2088,24 +2081,18 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
  */
 int ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
 {
-	struct ice_aqc_alloc_free_res_elem *sw_buf;
-	u16 buf_len;
+	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
+	u16 buf_len = __struct_size(sw_buf);
 	int status;
 
-	buf_len = struct_size(sw_buf, elem, 1);
-	sw_buf = kzalloc(buf_len, GFP_KERNEL);
-	if (!sw_buf)
-		return -ENOMEM;
-
 	sw_buf->num_elems = cpu_to_le16(1);
 	sw_buf->res_type = cpu_to_le16((ICE_AQC_RES_TYPE_RECIPE <<
 					ICE_AQC_RES_TYPE_S) |
				       ICE_AQC_RES_TYPE_FLAG_SHARED);
 	status = ice_aq_alloc_free_res(hw, sw_buf, buf_len,
 				       ice_aqc_opc_alloc_res);
 	if (!status)
 		*rid = le16_to_cpu(sw_buf->elem[0].e.sw_resp);
-	kfree(sw_buf);
 	return status;
 }
 
@@ -4434,28 +4421,19 @@ int
 ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 		   u16 *counter_id)
 {
-	struct ice_aqc_alloc_free_res_elem *buf;
-	u16 buf_len;
+	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+	u16 buf_len = __struct_size(buf);
 	int status;
 
-	/* Allocate resource */
-	buf_len = struct_size(buf, elem, 1);
-	buf = kzalloc(buf_len, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
-
 	buf->num_elems = cpu_to_le16(num_items);
 	buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
 				     ICE_AQC_RES_TYPE_M) | alloc_shared);
 
 	status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_alloc_res);
 	if (status)
-		goto exit;
+		return status;
 
 	*counter_id = le16_to_cpu(buf->elem[0].e.sw_resp);
-
-exit:
-	kfree(buf);
 	return status;
 }
 
@@ -4471,26 +4449,19 @@ int
 ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 		  u16 counter_id)
 {
-	struct ice_aqc_alloc_free_res_elem *buf;
-	u16 buf_len;
+	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+	u16 buf_len = __struct_size(buf);
 	int status;
 
-	/* Free resource */
-	buf_len = struct_size(buf, elem, 1);
-	buf = kzalloc(buf_len, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
-
 	buf->num_elems = cpu_to_le16(num_items);
 	buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
 				     ICE_AQC_RES_TYPE_M) | alloc_shared);
 	buf->elem[0].e.sw_resp = cpu_to_le16(counter_id);
 
 	status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_free_res);
 	if (status)
 		ice_debug(hw, ICE_DBG_SW, "counter resource could not be freed\n");
 
-	kfree(buf);
 	return status;
 }
 
@@ -4508,15 +4479,10 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
  */
 int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id)
 {
-	struct ice_aqc_alloc_free_res_elem *buf;
-	u16 buf_len;
+	DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+	u16 buf_len = __struct_size(buf);
 	int status;
 
-	buf_len = struct_size(buf, elem, 1);
-	buf = kzalloc(buf_len, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
-
 	buf->num_elems = cpu_to_le16(1);
 	if (shared)
 		buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
@@ -4534,7 +4500,6 @@ int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id)
 		ice_debug(hw, ICE_DBG_SW, "Could not set resource type %u id %u to %s\n",
 			  type, res_id, shared ? "SHARED" : "DEDICATED");
 
-	kfree(buf);
 	return status;
 }