From patchwork Mon Jun 10 23:38:16 2019
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 10985655
From: Saeed Mahameed
To: Saeed Mahameed, Leon Romanovsky
CC: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Vu Pham, Parav Pandit, Bodong Wang
Subject: [PATCH mlx5-next 02/16] net/mlx5: E-Switch, Handle representors creation in handler context
Date: Mon, 10 Jun 2019 23:38:16 +0000
Message-ID: <20190610233733.12155-3-saeedm@mellanox.com>
References: <20190610233733.12155-1-saeedm@mellanox.com>
In-Reply-To: <20190610233733.12155-1-saeedm@mellanox.com>
X-Mailer: git-send-email 2.21.0
X-Mailing-List: linux-rdma@vger.kernel.org
From: Vu Pham

Unify representor creation in the esw_functions_changed context
handler. Emulate the esw_functions_changed event for FW/HW that does
not support this event.
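To make the control flow easier to follow, here is a minimal, standalone
userspace sketch (not part of the patch and not kernel code) of the dispatch
this change introduces: when the device exposes the ESW functions-changed
event, the real event handler runs from the notifier; otherwise an emulated
handler loads the VF representors from the stored num_vfs. All struct and
function names in the sketch are invented for illustration and only model
mlx5_eswitch_is_funcs_handler() and the two work handlers in the diff below.

    /* Standalone sketch of the real-vs-emulated event dispatch. */
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_esw {
            bool has_funcs_event;   /* models mlx5_eswitch_is_funcs_handler() */
            int num_vfs;            /* models esw->esw_funcs.num_vfs */
    };

    /* Models esw_functions_changed_event_handler(): FW raised the event. */
    static void real_event_handler(struct fake_esw *esw)
    {
            printf("FW event: query functions, then load %d VF reps\n",
                   esw->num_vfs);
    }

    /* Models esw_emulate_event_handler(): no FW event, use stored num_vfs. */
    static void emulated_event_handler(struct fake_esw *esw)
    {
            if (esw->num_vfs)
                    printf("emulated event: load %d VF reps\n", esw->num_vfs);
    }

    /* Mirrors the INIT_WORK() selection in esw_functions_changed_event(). */
    static void dispatch_functions_changed(struct fake_esw *esw)
    {
            if (esw->has_funcs_event)
                    real_event_handler(esw);
            else
                    emulated_event_handler(esw);
    }

    int main(void)
    {
            struct fake_esw with_event = { .has_funcs_event = true, .num_vfs = 4 };
            struct fake_esw without_event = { .has_funcs_event = false, .num_vfs = 4 };

            dispatch_functions_changed(&with_event);
            dispatch_functions_changed(&without_event);
            return 0;
    }

Funneling both paths through one dispatch point keeps VF representor loading
in a single work-queue context, which is what the diff below does with the
INIT_WORK() selection and the new emulated handler.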
Signed-off-by: Vu Pham
Reviewed-by: Parav Pandit
Reviewed-by: Bodong Wang
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 12 +--
 .../mellanox/mlx5/core/eswitch_offloads.c     | 89 ++++++++++---------
 2 files changed, 50 insertions(+), 51 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index d8935232964a..504c0440b0b0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1720,7 +1720,6 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
 {
         struct mlx5_vport *vport;
         int total_nvports = 0;
-        u16 vf_nvports = 0;
         int err;
         int i, enabled_events;
 
@@ -1739,15 +1738,10 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
         esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d) mode (%d)\n", nvfs, mode);
 
         if (mode == SRIOV_OFFLOADS) {
-                if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
-                        err = mlx5_esw_query_functions(esw->dev, &vf_nvports);
-                        if (err)
-                                return err;
+                if (mlx5_core_is_ecpf_esw_manager(esw->dev))
                         total_nvports = esw->total_vports;
-                } else {
-                        vf_nvports = nvfs;
+                else
                         total_nvports = nvfs + MLX5_SPECIAL_VPORTS(esw->dev);
-                }
         }
 
         esw->mode = mode;
@@ -1761,7 +1755,7 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
         } else {
                 mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
                 mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
-                err = esw_offloads_init(esw, vf_nvports, total_nvports);
+                err = esw_offloads_init(esw, nvfs, total_nvports);
         }
 
         if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index d6246ee042fa..f843d8a35a2c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1436,34 +1436,13 @@ static int esw_offloads_load_vf_reps(struct mlx5_eswitch *esw, int nvports)
         return err;
 }
 
-static int __load_reps_all_vport(struct mlx5_eswitch *esw, int nvports,
-                                 u8 rep_type)
-{
-        int err;
-
-        /* Special vports must be loaded first. */
-        err = __load_reps_special_vport(esw, rep_type);
-        if (err)
-                return err;
-
-        err = __load_reps_vf_vport(esw, nvports, rep_type);
-        if (err)
-                goto err_vfs;
-
-        return 0;
-
-err_vfs:
-        __unload_reps_special_vport(esw, rep_type);
-        return err;
-}
-
-static int esw_offloads_load_all_reps(struct mlx5_eswitch *esw, int nvports)
+static int esw_offloads_load_special_vport(struct mlx5_eswitch *esw)
 {
         u8 rep_type = 0;
         int err;
 
         for (rep_type = 0; rep_type < NUM_REP_TYPES; rep_type++) {
-                err = __load_reps_all_vport(esw, nvports, rep_type);
+                err = __load_reps_special_vport(esw, rep_type);
                 if (err)
                         goto err_reps;
         }
@@ -1472,7 +1451,7 @@ static int esw_offloads_load_all_reps(struct mlx5_eswitch *esw, int nvports)
 
 err_reps:
         while (rep_type-- > 0)
-                __unload_reps_all_vport(esw, nvports, rep_type);
+                __unload_reps_special_vport(esw, rep_type);
         return err;
 }
 
@@ -1811,6 +1790,21 @@ static void esw_functions_changed_event_handler(struct work_struct *work)
         kfree(host_work);
 }
 
+static void esw_emulate_event_handler(struct work_struct *work)
+{
+        struct mlx5_host_work *host_work =
+                container_of(work, struct mlx5_host_work, work);
+        struct mlx5_eswitch *esw = host_work->esw;
+        int err;
+
+        if (esw->esw_funcs.num_vfs) {
+                err = esw_offloads_load_vf_reps(esw, esw->esw_funcs.num_vfs);
+                if (err)
+                        esw_warn(esw->dev, "Load vf reps err=%d\n", err);
+        }
+        kfree(host_work);
+}
+
 static int esw_functions_changed_event(struct notifier_block *nb,
                                        unsigned long type, void *data)
 {
@@ -1827,7 +1821,11 @@ static int esw_functions_changed_event(struct notifier_block *nb,
 
         host_work->esw = esw;
 
-        INIT_WORK(&host_work->work, esw_functions_changed_event_handler);
+        if (mlx5_eswitch_is_funcs_handler(esw->dev))
+                INIT_WORK(&host_work->work,
+                          esw_functions_changed_event_handler);
+        else
+                INIT_WORK(&host_work->work, esw_emulate_event_handler);
         queue_work(esw->work_queue, &host_work->work);
 
         return NOTIFY_OK;
@@ -1836,13 +1834,14 @@ static int esw_functions_changed_event(struct notifier_block *nb,
 static void esw_functions_changed_event_init(struct mlx5_eswitch *esw,
                                              u16 vf_nvports)
 {
-        if (!mlx5_eswitch_is_funcs_handler(esw->dev))
-                return;
-
-        MLX5_NB_INIT(&esw->esw_funcs.nb, esw_functions_changed_event,
-                     ESW_FUNCTIONS_CHANGED);
-        mlx5_eq_notifier_register(esw->dev, &esw->esw_funcs.nb);
-        esw->esw_funcs.num_vfs = vf_nvports;
+        if (mlx5_eswitch_is_funcs_handler(esw->dev)) {
+                esw->esw_funcs.num_vfs = 0;
+                MLX5_NB_INIT(&esw->esw_funcs.nb, esw_functions_changed_event,
+                             ESW_FUNCTIONS_CHANGED);
+                mlx5_eq_notifier_register(esw->dev, &esw->esw_funcs.nb);
+        } else {
+                esw->esw_funcs.num_vfs = vf_nvports;
+        }
 }
 
 static void esw_functions_changed_event_cleanup(struct mlx5_eswitch *esw)
@@ -1863,7 +1862,11 @@ int esw_offloads_init(struct mlx5_eswitch *esw, int vf_nvports,
         if (err)
                 return err;
 
-        err = esw_offloads_load_all_reps(esw, vf_nvports);
+        /* Only load special vports reps. VF reps will be loaded in
+         * context of functions_changed event handler through real
+         * or emulated event.
+         */
+        err = esw_offloads_load_special_vport(esw);
         if (err)
                 goto err_reps;
 
@@ -1873,6 +1876,16 @@ int esw_offloads_init(struct mlx5_eswitch *esw, int vf_nvports,
 
         mlx5_rdma_enable_roce(esw->dev);
 
+        /* Call esw_functions_changed event to load VF reps:
+         * 1. HW does not support the event then emulate it
+         * Or
+         * 2. The event was already notified when num_vfs changed
+         * and eswitch was in legacy mode
+         */
+        esw_functions_changed_event(&esw->esw_funcs.nb.nb,
+                                    MLX5_EVENT_TYPE_ESW_FUNCTIONS_CHANGED,
+                                    NULL);
+
         return 0;
 
 err_reps:
@@ -1901,18 +1914,10 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
 
 void esw_offloads_cleanup(struct mlx5_eswitch *esw)
 {
-        u16 num_vfs;
-
         esw_functions_changed_event_cleanup(esw);
-
-        if (mlx5_eswitch_is_funcs_handler(esw->dev))
-                num_vfs = esw->esw_funcs.num_vfs;
-        else
-                num_vfs = esw->dev->priv.sriov.num_vfs;
-
         mlx5_rdma_disable_roce(esw->dev);
         esw_offloads_devcom_cleanup(esw);
-        esw_offloads_unload_all_reps(esw, num_vfs);
+        esw_offloads_unload_all_reps(esw, esw->esw_funcs.num_vfs);
         esw_offloads_steering_cleanup(esw);
 }