From patchwork Mon Aug 7 15:58:46 2023
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 13344481
From: Przemek Kitszel
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Jacob Keller, Tony Nguyen, Simon Horman,
    Przemek Kitszel
Subject: [PATCH iwl-next v3 1/3] ice: ice_aq_check_events: fix off-by-one check when filling buffer
Date: Mon, 7 Aug 2023 11:58:46 -0400
Message-Id: <20230807155848.90907-2-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>
References: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>

Allow the task's event buffer to be filled also in the case that its size
is exactly the size of the received message.
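To illustrate the boundary this fixes, here is a minimal caller-side sketch
(not part of the diff below; the 4096-byte size is only an example) of a
waiter whose buffer is sized to exactly the message length, previously
rejected by the '>' comparison and now accepted by '>=':

        struct ice_rq_event_info event = {};

        /* buffer exactly as large as the expected completion message */
        event.buf_len = 4096;
        event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
        if (!event.msg_buf)
                return -ENOMEM;

        /* with buf_len equal to the received length, msg_buf is now filled */
        err = ice_aq_wait_for_event(pf, ice_aqc_opc_nvm_write, 15 * HZ, &event);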
Fixes: d69ea414c9b4 ("ice: implement device flash update via devlink")
Signed-off-by: Przemek Kitszel
---
 drivers/net/ethernet/intel/ice/ice_main.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index a73895483e6c..f2ad2153589a 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1357,7 +1357,9 @@ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
 static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
                                 struct ice_rq_event_info *event)
 {
+        struct ice_rq_event_info *task_ev;
         struct ice_aq_task *task;
+        bool found = false;
 
         spin_lock_bh(&pf->aq_wait_lock);
@@ -1365,15 +1367,15 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
                 if (task->state || task->opcode != opcode)
                         continue;
 
-                memcpy(&task->event->desc, &event->desc, sizeof(event->desc));
-                task->event->msg_len = event->msg_len;
+                task_ev = task->event;
+                memcpy(&task_ev->desc, &event->desc, sizeof(event->desc));
+                task_ev->msg_len = event->msg_len;
 
                 /* Only copy the data buffer if a destination was set */
-                if (task->event->msg_buf &&
-                    task->event->buf_len > event->buf_len) {
-                        memcpy(task->event->msg_buf, event->msg_buf,
+                if (task_ev->msg_buf && task_ev->buf_len >= event->buf_len) {
+                        memcpy(task_ev->msg_buf, event->msg_buf,
                                event->buf_len);
-                        task->event->buf_len = event->buf_len;
+                        task_ev->buf_len = event->buf_len;
                 }
 
                 task->state = ICE_AQ_TASK_COMPLETE;

From patchwork Mon Aug 7 15:58:47 2023
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 13344482
From: Przemek Kitszel
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Jacob Keller, Tony Nguyen, Simon Horman,
    Przemek Kitszel
Subject: [PATCH iwl-next v3 2/3] ice: embed &ice_rq_event_info event into struct ice_aq_task
Date: Mon, 7 Aug 2023 11:58:47 -0400
Message-Id: <20230807155848.90907-3-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>
References: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>

Expose struct ice_aq_task to callers, which takes the burden of memory
ownership out of the AQ-wait family of functions and reduces the need for
heap-based allocations.

Embed struct ice_rq_event_info event into struct ice_aq_task (instead of
keeping it as a pointer) to remove some more code from the callers.

A subsequent commit will improve this further.
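For illustration, the resulting caller pattern looks like this; a sketch
only, with the opcode and timeout values taken from the ice_fw_update.c
hunks below rather than prescribed by this patch:

        struct ice_aq_task task = {};
        int err;

        /* the task, including its embedded event, lives on the caller's stack */
        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_write, 15 * HZ);
        if (err)
                return err;

        /* completion data is read straight from the embedded event */
        completion_retval = le16_to_cpu(task.event.desc.retval);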
Signed-off-by: Przemek Kitszel
---
add/remove: 0/0 grow/shrink: 3/1 up/down: 266/-68 (198)
---
 drivers/net/ethernet/intel/ice/ice.h        | 20 ++++++++-
 .../net/ethernet/intel/ice/ice_fw_update.c  | 42 +++++++++----------
 drivers/net/ethernet/intel/ice/ice_main.c   | 29 ++-----------
 3 files changed, 42 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 34be1cb1e28f..6346283c5d14 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -931,8 +931,24 @@ void ice_fdir_release_flows(struct ice_hw *hw);
 void ice_fdir_replay_flows(struct ice_hw *hw);
 void ice_fdir_replay_fltrs(struct ice_pf *pf);
 int ice_fdir_create_dflt_rules(struct ice_pf *pf);
-int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
-                          struct ice_rq_event_info *event);
+
+enum ice_aq_task_state {
+        ICE_AQ_TASK_WAITING,
+        ICE_AQ_TASK_COMPLETE,
+        ICE_AQ_TASK_CANCELED,
+};
+
+struct ice_aq_task {
+        struct hlist_node entry;
+        struct ice_rq_event_info event;
+        enum ice_aq_task_state state;
+        u16 opcode;
+};
+
+
+int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
+                          u16 opcode, unsigned long timeout);
+
 int ice_open(struct net_device *netdev);
 int ice_open_internal(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_fw_update.c b/drivers/net/ethernet/intel/ice/ice_fw_update.c
index 3dc5662d62a6..819b70823e9c 100644
--- a/drivers/net/ethernet/intel/ice/ice_fw_update.c
+++ b/drivers/net/ethernet/intel/ice/ice_fw_update.c
@@ -293,13 +293,12 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
 {
         u16 completion_module, completion_retval;
         struct device *dev = ice_pf_to_dev(pf);
-        struct ice_rq_event_info event;
+        struct ice_aq_task task = {};
         struct ice_hw *hw = &pf->hw;
+        struct ice_aq_desc *desc;
         u32 completion_offset;
         int err;
 
-        memset(&event, 0, sizeof(event));
-
         dev_dbg(dev, "Writing block of %u bytes for module 0x%02x at offset %u\n", block_size, module, offset);
 
@@ -319,7 +318,7 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
          * is conservative and is intended to prevent failure to update when
          * firmware is slow to respond.
          */
-        err = ice_aq_wait_for_event(pf, ice_aqc_opc_nvm_write, 15 * HZ, &event);
+        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_write, 15 * HZ);
         if (err) {
                 dev_err(dev, "Timed out while trying to flash module 0x%02x with block of size %u at offset %u, err %d\n", module, block_size, offset, err);
@@ -327,11 +326,12 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
                 return -EIO;
         }
 
-        completion_module = le16_to_cpu(event.desc.params.nvm.module_typeid);
-        completion_retval = le16_to_cpu(event.desc.retval);
+        desc = &task.event.desc;
+        completion_module = le16_to_cpu(desc->params.nvm.module_typeid);
+        completion_retval = le16_to_cpu(desc->retval);
 
-        completion_offset = le16_to_cpu(event.desc.params.nvm.offset_low);
-        completion_offset |= event.desc.params.nvm.offset_high << 16;
+        completion_offset = le16_to_cpu(desc->params.nvm.offset_low);
+        completion_offset |= desc->params.nvm.offset_high << 16;
 
         if (completion_module != module) {
                 dev_err(dev, "Unexpected module_typeid in write completion: got 0x%x, expected 0x%x\n",
@@ -363,8 +363,8 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
          */
         if (reset_level && last_cmd && module == ICE_SR_1ST_NVM_BANK_PTR) {
                 if (hw->dev_caps.common_cap.pcie_reset_avoidance) {
-                        *reset_level = (event.desc.params.nvm.cmd_flags &
-                                        ICE_AQC_NVM_RESET_LVL_M);
+                        *reset_level = desc->params.nvm.cmd_flags &
+                                       ICE_AQC_NVM_RESET_LVL_M;
                         dev_dbg(dev, "Firmware reported required reset level as %u\n", *reset_level);
                 } else {
@@ -479,15 +479,14 @@ ice_erase_nvm_module(struct ice_pf *pf, u16 module, const char *component,
 {
         u16 completion_module, completion_retval;
         struct device *dev = ice_pf_to_dev(pf);
-        struct ice_rq_event_info event;
+        struct ice_aq_task task = {};
         struct ice_hw *hw = &pf->hw;
+        struct ice_aq_desc *desc;
         struct devlink *devlink;
         int err;
 
         dev_dbg(dev, "Beginning erase of flash component '%s', module 0x%02x\n", component, module);
 
-        memset(&event, 0, sizeof(event));
-
         devlink = priv_to_devlink(pf);
 
         devlink_flash_update_timeout_notify(devlink, "Erasing", component, ICE_FW_ERASE_TIMEOUT);
@@ -502,7 +501,7 @@ ice_erase_nvm_module(struct ice_pf *pf, u16 module, const char *component,
                 goto out_notify_devlink;
         }
 
-        err = ice_aq_wait_for_event(pf, ice_aqc_opc_nvm_erase, ICE_FW_ERASE_TIMEOUT * HZ, &event);
+        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_erase, ICE_FW_ERASE_TIMEOUT * HZ);
         if (err) {
                 dev_err(dev, "Timed out waiting for firmware to respond with erase completion for %s (module 0x%02x), err %d\n", component, module, err);
@@ -510,8 +509,9 @@ ice_erase_nvm_module(struct ice_pf *pf, u16 module, const char *component,
                 goto out_notify_devlink;
         }
 
-        completion_module = le16_to_cpu(event.desc.params.nvm.module_typeid);
-        completion_retval = le16_to_cpu(event.desc.retval);
+        desc = &task.event.desc;
+        completion_module = le16_to_cpu(desc->params.nvm.module_typeid);
+        completion_retval = le16_to_cpu(desc->retval);
 
         if (completion_module != module) {
                 dev_err(dev, "Unexpected module_typeid in erase completion for %s: got 0x%x, expected 0x%x\n",
@@ -560,14 +560,12 @@ ice_switch_flash_banks(struct ice_pf *pf, u8 activate_flags,
                        u8 *emp_reset_available, struct netlink_ext_ack *extack)
 {
         struct device *dev = ice_pf_to_dev(pf);
-        struct ice_rq_event_info event;
+        struct ice_aq_task task = {};
         struct ice_hw *hw = &pf->hw;
         u16 completion_retval;
         u8 response_flags;
         int err;
 
-        memset(&event, 0, sizeof(event));
-
         err = ice_nvm_write_activate(hw, activate_flags, &response_flags);
         if (err) {
                 dev_err(dev, "Failed to switch active flash banks, err %d aq_err %s\n",
@@ -592,8 +590,8 @@ ice_switch_flash_banks(struct ice_pf *pf, u8 activate_flags,
                 }
         }
 
-        err = ice_aq_wait_for_event(pf, ice_aqc_opc_nvm_write_activate, 30 * HZ,
-                                    &event);
+        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_write_activate,
+                                    30 * HZ);
         if (err) {
                 dev_err(dev, "Timed out waiting for firmware to switch active flash banks, err %d\n", err);
@@ -601,7 +599,7 @@ ice_switch_flash_banks(struct ice_pf *pf, u8 activate_flags,
                 return err;
         }
 
-        completion_retval = le16_to_cpu(event.desc.retval);
+        completion_retval = le16_to_cpu(task.event.desc.retval);
         if (completion_retval) {
                 dev_err(dev, "Firmware failed to switch active flash banks aq_err %s\n",
                         ice_aq_str((enum ice_aq_err)completion_retval));
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index f2ad2153589a..36772215b8c6 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1250,26 +1250,12 @@ ice_handle_link_event(struct ice_pf *pf, struct ice_rq_event_info *event)
         return status;
 }
 
-enum ice_aq_task_state {
-        ICE_AQ_TASK_WAITING = 0,
-        ICE_AQ_TASK_COMPLETE,
-        ICE_AQ_TASK_CANCELED,
-};
-
-struct ice_aq_task {
-        struct hlist_node entry;
-
-        u16 opcode;
-        struct ice_rq_event_info *event;
-        enum ice_aq_task_state state;
-};
-
 /**
  * ice_aq_wait_for_event - Wait for an AdminQ event from firmware
  * @pf: pointer to the PF private structure
+ * @task: ptr to task structure
  * @opcode: the opcode to wait for
  * @timeout: how long to wait, in jiffies
- * @event: storage for the event info
  *
  * Waits for a specific AdminQ completion event on the ARQ for a given PF. The
  * current thread will be put to sleep until the specified event occurs or
  * until the given timeout is reached.
 *
 * To obtain only the descriptor contents, pass an event without an allocated
 * msg_buf. If the complete data buffer is desired, allocate the
 * event->msg_buf with enough space ahead of time.
 *
 * Returns: zero on success, or a negative error code on failure.
 */
-int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
-                          struct ice_rq_event_info *event)
+int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
+                          u16 opcode, unsigned long timeout)
 {
         struct device *dev = ice_pf_to_dev(pf);
-        struct ice_aq_task *task;
         unsigned long start;
         long ret;
         int err;
 
-        task = kzalloc(sizeof(*task), GFP_KERNEL);
-        if (!task)
-                return -ENOMEM;
-
         INIT_HLIST_NODE(&task->entry);
         task->opcode = opcode;
-        task->event = event;
         task->state = ICE_AQ_TASK_WAITING;
 
         spin_lock_bh(&pf->aq_wait_lock);
@@ -1331,7 +1311,6 @@ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
         spin_lock_bh(&pf->aq_wait_lock);
         hlist_del(&task->entry);
         spin_unlock_bh(&pf->aq_wait_lock);
-        kfree(task);
 
         return err;
 }
@@ -1367,7 +1346,7 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
                 if (task->state || task->opcode != opcode)
                         continue;
 
-                task_ev = task->event;
+                task_ev = &task->event;
                 memcpy(&task_ev->desc, &event->desc, sizeof(event->desc));
                 task_ev->msg_len = event->msg_len;

From patchwork Mon Aug 7 15:58:48 2023
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 13344483
From: Przemek Kitszel
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Jacob Keller, Tony Nguyen, Simon Horman,
    Przemek Kitszel
Subject: [PATCH iwl-next v3 3/3] ice: split ice_aq_wait_for_event() func into two
Date: Mon, 7 Aug 2023 11:58:48 -0400
Message-Id: <20230807155848.90907-4-przemyslaw.kitszel@intel.com>
In-Reply-To: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>
References: <20230807155848.90907-1-przemyslaw.kitszel@intel.com>

Mitigate the race between registering on the wait list and receiving the
AQ response from FW: ice_aq_prep_for_event() should be called before
sending the AQ command, and ice_aq_wait_for_event() after it, to wait for
the AQ response.

Please note that this was found by reading the code; an actual race has
not been observed.
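The new call pattern, sketched here for illustration (mirroring the
ice_fw_update.c hunks below; the erase opcode and timeout are simply the
values used there):

        struct ice_aq_task task = {};
        int err;

        /* register on the wait list before the command is sent ... */
        ice_aq_prep_for_event(pf, &task, ice_aqc_opc_nvm_erase);

        err = ice_aq_erase_nvm(hw, module, NULL);
        if (err)
                return err;

        /* ... so the response cannot arrive before we are listening */
        err = ice_aq_wait_for_event(pf, &task, ICE_FW_ERASE_TIMEOUT * HZ);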
Signed-off-by: Przemek Kitszel
---
add/remove: 2/0 grow/shrink: 1/3 up/down: 131/-61 (70)
---
 drivers/net/ethernet/intel/ice/ice.h        |  7 +-
 .../net/ethernet/intel/ice/ice_fw_update.c  | 13 ++--
 drivers/net/ethernet/intel/ice/ice_main.c   | 67 ++++++++++++-------
 3 files changed, 57 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 6346283c5d14..3ac645afbc8d 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -933,6 +933,7 @@ void ice_fdir_replay_fltrs(struct ice_pf *pf);
 int ice_fdir_create_dflt_rules(struct ice_pf *pf);
 
 enum ice_aq_task_state {
+        ICE_AQ_TASK_NOT_PREPARED,
         ICE_AQ_TASK_WAITING,
         ICE_AQ_TASK_COMPLETE,
         ICE_AQ_TASK_CANCELED,
@@ -945,10 +946,10 @@ struct ice_aq_task {
         u16 opcode;
 };
 
-
+void ice_aq_prep_for_event(struct ice_pf *pf, struct ice_aq_task *task,
+                           u16 opcode);
 int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
-                          u16 opcode, unsigned long timeout);
-
+                          unsigned long timeout);
 int ice_open(struct net_device *netdev);
 int ice_open_internal(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_fw_update.c b/drivers/net/ethernet/intel/ice/ice_fw_update.c
index 819b70823e9c..319a2d6fe26c 100644
--- a/drivers/net/ethernet/intel/ice/ice_fw_update.c
+++ b/drivers/net/ethernet/intel/ice/ice_fw_update.c
@@ -302,6 +302,8 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
         dev_dbg(dev, "Writing block of %u bytes for module 0x%02x at offset %u\n", block_size, module, offset);
 
+        ice_aq_prep_for_event(pf, &task, ice_aqc_opc_nvm_write);
+
         err = ice_aq_update_nvm(hw, module, offset, block_size, block, last_cmd, 0, NULL);
         if (err) {
@@ -318,7 +320,7 @@ ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
          * is conservative and is intended to prevent failure to update when
          * firmware is slow to respond.
          */
-        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_write, 15 * HZ);
+        err = ice_aq_wait_for_event(pf, &task, 15 * HZ);
         if (err) {
                 dev_err(dev, "Timed out while trying to flash module 0x%02x with block of size %u at offset %u, err %d\n", module, block_size, offset, err);
@@ -491,6 +493,8 @@ ice_erase_nvm_module(struct ice_pf *pf, u16 module, const char *component,
         devlink_flash_update_timeout_notify(devlink, "Erasing", component, ICE_FW_ERASE_TIMEOUT);
 
+        ice_aq_prep_for_event(pf, &task, ice_aqc_opc_nvm_erase);
+
         err = ice_aq_erase_nvm(hw, module, NULL);
         if (err) {
                 dev_err(dev, "Failed to erase %s (module 0x%02x), err %d aq_err %s\n",
@@ -501,7 +505,7 @@ ice_erase_nvm_module(struct ice_pf *pf, u16 module, const char *component,
                 goto out_notify_devlink;
         }
 
-        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_erase, ICE_FW_ERASE_TIMEOUT * HZ);
+        err = ice_aq_wait_for_event(pf, &task, ICE_FW_ERASE_TIMEOUT * HZ);
         if (err) {
                 dev_err(dev, "Timed out waiting for firmware to respond with erase completion for %s (module 0x%02x), err %d\n", component, module, err);
@@ -566,6 +570,8 @@ ice_switch_flash_banks(struct ice_pf *pf, u8 activate_flags,
         u8 response_flags;
         int err;
 
+        ice_aq_prep_for_event(pf, &task, ice_aqc_opc_nvm_write_activate);
+
         err = ice_nvm_write_activate(hw, activate_flags, &response_flags);
         if (err) {
                 dev_err(dev, "Failed to switch active flash banks, err %d aq_err %s\n",
@@ -590,8 +596,7 @@ ice_switch_flash_banks(struct ice_pf *pf, u8 activate_flags,
                 }
         }
 
-        err = ice_aq_wait_for_event(pf, &task, ice_aqc_opc_nvm_write_activate,
-                                    30 * HZ);
+        err = ice_aq_wait_for_event(pf, &task, 30 * HZ);
         if (err) {
                 dev_err(dev, "Timed out waiting for firmware to switch active flash banks, err %d\n", err);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 36772215b8c6..edbbadd20845 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1251,30 +1251,24 @@ ice_handle_link_event(struct ice_pf *pf, struct ice_rq_event_info *event)
 }
 
 /**
- * ice_aq_wait_for_event - Wait for an AdminQ event from firmware
+ * ice_aq_prep_for_event - Prepare to wait for an AdminQ event from firmware
  * @pf: pointer to the PF private structure
- * @task: ptr to task structure
+ * @task: intermediate helper storage and identifier for waiting
  * @opcode: the opcode to wait for
- * @timeout: how long to wait, in jiffies
  *
- * Waits for a specific AdminQ completion event on the ARQ for a given PF. The
- * current thread will be put to sleep until the specified event occurs or
- * until the given timeout is reached.
+ * Prepares to wait for a specific AdminQ completion event on the ARQ for
+ * a given PF. Actual wait would be done by a call to ice_aq_wait_for_event().
  *
- * To obtain only the descriptor contents, pass an event without an allocated
- * msg_buf. If the complete data buffer is desired, allocate the
- * event->msg_buf with enough space ahead of time.
+ * Calls are separated to allow caller registering for event before sending
+ * the command, which mitigates a race between registering and FW responding.
  *
- * Returns: zero on success, or a negative error code on failure.
+ * To obtain only the descriptor contents, pass a task->event with null
+ * msg_buf. If the complete data buffer is desired, allocate the
+ * task->event.msg_buf with enough space ahead of time.
  */
-int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
-                          u16 opcode, unsigned long timeout)
+void ice_aq_prep_for_event(struct ice_pf *pf, struct ice_aq_task *task,
+                           u16 opcode)
 {
-        struct device *dev = ice_pf_to_dev(pf);
-        unsigned long start;
-        long ret;
-        int err;
-
         INIT_HLIST_NODE(&task->entry);
         task->opcode = opcode;
         task->state = ICE_AQ_TASK_WAITING;
@@ -1282,12 +1276,37 @@ int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
         spin_lock_bh(&pf->aq_wait_lock);
         hlist_add_head(&task->entry, &pf->aq_wait_list);
         spin_unlock_bh(&pf->aq_wait_lock);
+}
 
-        start = jiffies;
+/**
+ * ice_aq_wait_for_event - Wait for an AdminQ event from firmware
+ * @pf: pointer to the PF private structure
+ * @task: ptr prepared by ice_aq_prep_for_event()
+ * @timeout: how long to wait, in jiffies
+ *
+ * Waits for a specific AdminQ completion event on the ARQ for a given PF. The
+ * current thread will be put to sleep until the specified event occurs or
+ * until the given timeout is reached.
+ *
+ * Returns: zero on success, or a negative error code on failure.
+ */
+int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
+                          unsigned long timeout)
+{
+        enum ice_aq_task_state *state = &task->state;
+        struct device *dev = ice_pf_to_dev(pf);
+        unsigned long start = jiffies;
+        long ret;
+        int err;
 
-        ret = wait_event_interruptible_timeout(pf->aq_wait_queue, task->state,
+        ret = wait_event_interruptible_timeout(pf->aq_wait_queue,
+                                               *state != ICE_AQ_TASK_WAITING,
                                                timeout);
-        switch (task->state) {
+        switch (*state) {
+        case ICE_AQ_TASK_NOT_PREPARED:
+                WARN(1, "call to %s without ice_aq_prep_for_event()", __func__);
+                err = -EINVAL;
+                break;
         case ICE_AQ_TASK_WAITING:
                 err = ret < 0 ? ret : -ETIMEDOUT;
                 break;
@@ -1298,7 +1317,7 @@ int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
                 err = ret < 0 ? ret : 0;
                 break;
         default:
-                WARN(1, "Unexpected AdminQ wait task state %u", task->state);
+                WARN(1, "Unexpected AdminQ wait task state %u", *state);
                 err = -EINVAL;
                 break;
         }
@@ -1306,7 +1325,7 @@ int ice_aq_wait_for_event(struct ice_pf *pf, struct ice_aq_task *task,
         dev_dbg(dev, "Waited %u msecs (max %u msecs) for firmware response to op 0x%04x\n",
                 jiffies_to_msecs(jiffies - start), jiffies_to_msecs(timeout),
-                opcode);
+                task->opcode);
 
         spin_lock_bh(&pf->aq_wait_lock);
         hlist_del(&task->entry);
@@ -1343,7 +1362,9 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
         spin_lock_bh(&pf->aq_wait_lock);
         hlist_for_each_entry(task, &pf->aq_wait_list, entry) {
-                if (task->state || task->opcode != opcode)
+                if (task->state != ICE_AQ_TASK_WAITING)
+                        continue;
+                if (task->opcode != opcode)
                         continue;
 
                 task_ev = &task->event;