From patchwork Fri Oct 21 21:38:50 2022
From: Jacob Keller
X-Patchwork-Id: 13015502
X-Patchwork-Delegate: kuba@kernel.org
To: Jakub Kicinski, David Miller
Cc: netdev@vger.kernel.org, Sylwester Dziedziuch, Mateusz Palczewski,
 Konrad Jankowski
Subject: [PATCH net] i40e: Fix VF hang when reset is triggered on another VF
Date: Fri, 21 Oct 2022 14:38:50 -0700
Message-Id: <20221021213850.3970720-1-jacob.e.keller@intel.com>
X-Mailer: git-send-email 2.38.0.83.gd420dda05763
X-Mailing-List: netdev@vger.kernel.org

From: Sylwester Dziedziuch

When a reset was triggered on one VF with i40e_reset_vf, the global PF
state __I40E_VF_DISABLE was set and stayed set until that reset
finished. If a reset of another VF was requested in the meantime, the
request bailed out early because of the global state: the second VF was
notified of the incoming reset, but the reset itself never happened, so
that VF hung waiting for it and the following error was logged:

[ +4.890195] iavf 0000:86:02.1: Never saw reset

Fix this by introducing a new per-VF state, I40E_VF_STATE_RESETTING,
that is set on a VF while it is resetting, instead of using the global
__I40E_VF_DISABLE PF state.
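For illustration only (not part of the submission), here is a minimal
userspace sketch of the locking pattern being changed. All names are
hypothetical, and the kernel's test_and_set_bit()/clear_bit() are
emulated with C11 atomics (atomic_exchange returns the previous value,
like test_and_set_bit). A single global "disable" bit rejects a reset
of VF 1 while VF 0 is mid-reset, which is the hang above; a per-VF
"resetting" bit only rejects a concurrent reset of the same VF:

/* Userspace model only -- NOT the driver code. Build: cc -std=c11 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_VFS 4

static atomic_bool pf_vf_disable;		/* old: one global bit */
static atomic_bool vf_resetting[NUM_VFS];	/* new: one bit per VF */

/* Old scheme: any in-progress VF reset blocks resets of ALL VFs. */
static bool reset_vf_old(int vf_id)
{
	if (atomic_exchange(&pf_vf_disable, true))
		return false;	/* bail out: the target VF hangs */
	printf("VF %d reset done (global bit)\n", vf_id);
	atomic_store(&pf_vf_disable, false);
	return true;
}

/* New scheme: only a concurrent reset of the SAME VF is rejected. */
static bool reset_vf_new(int vf_id)
{
	if (atomic_exchange(&vf_resetting[vf_id], true))
		return false;
	printf("VF %d reset done (per-VF bit)\n", vf_id);
	atomic_store(&vf_resetting[vf_id], false);
	return true;
}

int main(void)
{
	/* Pretend VF 0 is mid-reset when a reset of VF 1 is requested. */
	atomic_store(&pf_vf_disable, true);
	if (!reset_vf_old(1))
		puts("old scheme: VF 1 never saw reset -> hang");
	atomic_store(&pf_vf_disable, false);

	atomic_store(&vf_resetting[0], true);
	reset_vf_new(1);	/* succeeds: different VF */
	atomic_store(&vf_resetting[0], false);
	return 0;
}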
Fixes: 3ba9bcb4b68f ("i40e: add locking around VF reset")
Signed-off-by: Sylwester Dziedziuch
Signed-off-by: Mateusz Palczewski
Tested-by: Konrad Jankowski
---
 .../ethernet/intel/i40e/i40e_virtchnl_pf.c | 43 ++++++++++++++-----
 .../ethernet/intel/i40e/i40e_virtchnl_pf.h |  1 +
 2 files changed, 33 insertions(+), 11 deletions(-)

base-commit: 4d814b329a4d54cd10eee4bd2ce5a8175646cc16
prerequisite-patch-id: c24e3fc70eea2762c00e24407d82075758adb183

diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 7e9f6a69eb10..72ddcefc45b1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -1536,10 +1536,12 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
 	if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))
 		return true;
 
-	/* If the VFs have been disabled, this means something else is
-	 * resetting the VF, so we shouldn't continue.
-	 */
-	if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
+	/* Bail out if VFs are disabled. */
+	if (test_bit(__I40E_VF_DISABLE, pf->state))
+		return true;
+
+	/* If VF is being reset already we don't need to continue. */
+	if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
 		return true;
 
 	i40e_trigger_vf_reset(vf, flr);
@@ -1576,7 +1578,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
 	i40e_cleanup_reset_vf(vf);
 
 	i40e_flush(hw);
-	clear_bit(__I40E_VF_DISABLE, pf->state);
+	clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);
 
 	return true;
 }
@@ -1609,8 +1611,12 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
 		return false;
 
 	/* Begin reset on all VFs at once */
-	for (v = 0; v < pf->num_alloc_vfs; v++)
-		i40e_trigger_vf_reset(&pf->vf[v], flr);
+	for (v = 0; v < pf->num_alloc_vfs; v++) {
+		vf = &pf->vf[v];
+		/* If VF is being reset no need to trigger reset again */
+		if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+			i40e_trigger_vf_reset(&pf->vf[v], flr);
+	}
 
 	/* HW requires some time to make sure it can flush the FIFO for a VF
 	 * when it resets it. Poll the VPGEN_VFRSTAT register for each VF in
@@ -1626,9 +1632,11 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
 	 */
 	while (v < pf->num_alloc_vfs) {
 		vf = &pf->vf[v];
-		reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
-		if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
-			break;
+		if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
+			reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
+			if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
+				break;
+		}
 
 		/* If the current VF has finished resetting, move on
 		 * to the next VF in sequence.
@@ -1656,6 +1664,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
 		if (pf->vf[v].lan_vsi_idx == 0)
 			continue;
 
+		/* If VF is reset in another thread just continue */
+		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+			continue;
+
 		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
 	}
 
@@ -1667,6 +1679,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
 		if (pf->vf[v].lan_vsi_idx == 0)
 			continue;
 
+		/* If VF is reset in another thread just continue */
+		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+			continue;
+
 		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
 	}
 
@@ -1676,8 +1692,13 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
 	mdelay(50);
 
 	/* Finish the reset on each VF */
-	for (v = 0; v < pf->num_alloc_vfs; v++)
+	for (v = 0; v < pf->num_alloc_vfs; v++) {
+		/* If VF is reset in another thread just continue */
+		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+			continue;
+
 		i40e_cleanup_reset_vf(&pf->vf[v]);
+	}
 
 	i40e_flush(hw);
 	clear_bit(__I40E_VF_DISABLE, pf->state);

diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index a554d0a0b09b..358bbdb58795 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -39,6 +39,7 @@ enum i40e_vf_states {
 	I40E_VF_STATE_MC_PROMISC,
 	I40E_VF_STATE_UC_PROMISC,
 	I40E_VF_STATE_PRE_ENABLE,
+	I40E_VF_STATE_RESETTING
 };
 
 /* VF capabilities */