From patchwork Mon Apr 17 12:18:49 2023
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13213765
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer,
    Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi,
    Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 01/15] net/mlx5e: Move XDP struct and enum to XDP header
Date: Mon, 17 Apr 2023 15:18:49 +0300
Message-ID: <20230417121903.46218-2-tariqt@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

Move struct mlx5e_xdp_info and enum mlx5e_xdp_xmit_mode from the
generic en.h to the XDP header, where they belong.

Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h     | 35 -------------------
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h     | 35 +++++++++++++++++++
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index ba615b74bb8e..3f5463d42a1e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -475,41 +475,6 @@ struct mlx5e_txqsq {
         cqe_ts_to_ns ptp_cyc2time;
 } ____cacheline_aligned_in_smp;
 
-/* XDP packets can be transmitted in different ways. On completion, we need to
- * distinguish between them to clean up things in a proper way.
- */
-enum mlx5e_xdp_xmit_mode {
-        /* An xdp_frame was transmitted due to either XDP_REDIRECT from another
-         * device or XDP_TX from an XSK RQ. The frame has to be unmapped and
-         * returned.
-         */
-        MLX5E_XDP_XMIT_MODE_FRAME,
-
-        /* The xdp_frame was created in place as a result of XDP_TX from a
-         * regular RQ. No DMA remapping happened, and the page belongs to us.
-         */
-        MLX5E_XDP_XMIT_MODE_PAGE,
-
-        /* No xdp_frame was created at all, the transmit happened from a UMEM
-         * page. The UMEM Completion Ring producer pointer has to be increased.
-         */
-        MLX5E_XDP_XMIT_MODE_XSK,
-};
-
-struct mlx5e_xdp_info {
-        enum mlx5e_xdp_xmit_mode mode;
-        union {
-                struct {
-                        struct xdp_frame *xdpf;
-                        dma_addr_t dma_addr;
-                } frame;
-                struct {
-                        struct mlx5e_rq *rq;
-                        struct page *page;
-                } page;
-        };
-};
-
 struct mlx5e_xmit_data {
         dma_addr_t dma_addr;
         void *data;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 10bcfa6f88c1..8208692035f8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -50,6 +50,41 @@ struct mlx5e_xdp_buff {
         struct mlx5e_rq *rq;
 };
 
+/* XDP packets can be transmitted in different ways. On completion, we need to
+ * distinguish between them to clean up things in a proper way.
+ */
+enum mlx5e_xdp_xmit_mode {
+        /* An xdp_frame was transmitted due to either XDP_REDIRECT from another
+         * device or XDP_TX from an XSK RQ. The frame has to be unmapped and
+         * returned.
+         */
+        MLX5E_XDP_XMIT_MODE_FRAME,
+
+        /* The xdp_frame was created in place as a result of XDP_TX from a
+         * regular RQ. No DMA remapping happened, and the page belongs to us.
+         */
+        MLX5E_XDP_XMIT_MODE_PAGE,
+
+        /* No xdp_frame was created at all, the transmit happened from a UMEM
+         * page. The UMEM Completion Ring producer pointer has to be increased.
+         */
+        MLX5E_XDP_XMIT_MODE_XSK,
+};
+
+struct mlx5e_xdp_info {
+        enum mlx5e_xdp_xmit_mode mode;
+        union {
+                struct {
+                        struct xdp_frame *xdpf;
+                        dma_addr_t dma_addr;
+                } frame;
+                struct {
+                        struct mlx5e_rq *rq;
+                        struct page *page;
+                } page;
+        };
+};
+
 struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq,
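
Note (not part of the patch): the enum above encodes what each packet needs at
completion time. As a minimal illustration only, a completion handler could
dispatch on the mode roughly as follows; the function name
example_xdp_completion and the simplified cleanup calls are assumptions for
this sketch (it presumes the usual kernel/driver headers), not the driver's
exact code.

static void example_xdp_completion(struct device *dev, struct mlx5e_xdp_info *xdpi)
{
        switch (xdpi->mode) {
        case MLX5E_XDP_XMIT_MODE_FRAME:
                /* The frame was DMA-mapped at transmit time: unmap and return it. */
                dma_unmap_single(dev, xdpi->frame.dma_addr,
                                 xdpi->frame.xdpf->len, DMA_TO_DEVICE);
                xdp_return_frame(xdpi->frame.xdpf);
                break;
        case MLX5E_XDP_XMIT_MODE_PAGE:
                /* The page came from our own RQ page pool: recycle it there. */
                page_pool_put_full_page(xdpi->page.rq->page_pool,
                                        xdpi->page.page, true);
                break;
        case MLX5E_XDP_XMIT_MODE_XSK:
                /* Nothing to unmap or free; only the XSK completion ring advances. */
                break;
        }
}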
From patchwork Mon Apr 17 12:18:50 2023
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13213766
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer,
    Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi,
    Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 02/15] net/mlx5e: Move struct mlx5e_xmit_data to datapath header
Date: Mon, 17 Apr 2023 15:18:50 +0300
Message-ID: <20230417121903.46218-3-tariqt@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

Move the TX datapath struct from the generic en.h to the datapath
txrx.h header, where it belongs.

Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      | 7 +------
 drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 6 ++++++
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 3f5463d42a1e..479979318c50 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -475,12 +475,6 @@ struct mlx5e_txqsq {
         cqe_ts_to_ns ptp_cyc2time;
 } ____cacheline_aligned_in_smp;
 
-struct mlx5e_xmit_data {
-        dma_addr_t dma_addr;
-        void *data;
-        u32 len;
-};
-
 struct mlx5e_xdp_info_fifo {
         struct mlx5e_xdp_info *xi;
         u32 *cc;
@@ -489,6 +483,7 @@ struct mlx5e_xdp_info_fifo {
 };
 
 struct mlx5e_xdpsq;
+struct mlx5e_xmit_data;
 
 typedef int (*mlx5e_fp_xmit_xdp_frame_check)(struct mlx5e_xdpsq *);
 typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *,
                                         struct mlx5e_xmit_data *,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 651be7aaf7d5..6f7ebedda279 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -77,6 +77,12 @@ static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
 }
 
 /* TX */
+struct mlx5e_xmit_data {
+        dma_addr_t dma_addr;
+        void *data;
+        u32 len;
+};
+
 netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev);
 bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
 void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);
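
Note (not part of the patch): the en.h hunk above works because a forward
declaration is enough wherever only pointers to struct mlx5e_xmit_data appear.
A stand-alone sketch with purely illustrative names (xmit_data, xmit_fn,
do_xmit are invented for this example):

/* forward declaration: an incomplete type is all that pointer users need */
struct xmit_data;

typedef int (*xmit_fn)(struct xmit_data *xd, int check_result);

/* the full definition (txrx.h in the patch) is only required by code
 * that actually accesses the members
 */
struct xmit_data {
        unsigned long dma_addr;
        void *data;
        unsigned int len;
};

static int do_xmit(struct xmit_data *xd, int check_result)
{
        return check_result ? check_result : (int)xd->len;
}

static xmit_fn xmit_cb = do_xmit;   /* the typedef itself never needed the definition */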
From patchwork Mon Apr 17 12:18:51 2023
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13213769
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer,
    Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi,
    Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 03/15] net/mlx5e: Introduce extended version for mlx5e_xmit_data
Date: Mon, 17 Apr 2023 15:18:51 +0300
Message-ID: <20230417121903.46218-4-tariqt@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

Introduce struct mlx5e_xmit_data_frags to be used for non-linear xmit
buffers. Let it include the sinfo pointer.

Take one bit from the len field to indicate whether the descriptor has
fragments and can be cast up into the extended version.

Zero-init to make sure has_frags, and potentially future fields, are
zero when not explicitly assigned.

Another field will be added in a downstream patch to indicate and point
to the dma addresses of the different frags, for redirect-in requests.

This simplifies the mlx5e_xmit_xdp_frame / mlx5e_xmit_xdp_frame_mpwqe
function parameters.

Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h     |  1 -
 .../net/ethernet/mellanox/mlx5/core/en/txrx.h    |  8 ++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c     | 63 ++++++++++---------
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h     |  2 -
 .../ethernet/mellanox/mlx5/core/en/xsk/tx.c      |  4 +-
 5 files changed, 44 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 479979318c50..386f5a498e52 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -487,7 +487,6 @@ struct mlx5e_xmit_data;
 typedef int (*mlx5e_fp_xmit_xdp_frame_check)(struct mlx5e_xdpsq *);
 typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *,
                                         struct mlx5e_xmit_data *,
-                                        struct skb_shared_info *,
                                         int);
 
 struct mlx5e_xdpsq {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 6f7ebedda279..1302f52db883 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -80,7 +80,13 @@ static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
 struct mlx5e_xmit_data {
         dma_addr_t dma_addr;
         void *data;
-        u32 len;
+        u32 len : 31;
+        u32 has_frags : 1;
+};
+
+struct mlx5e_xmit_data_frags {
+        struct mlx5e_xmit_data xd;
+        struct skb_shared_info *sinfo;
 };
 
 netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index c8b532cea7d1..3e7ebf0f0f01 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -61,8 +61,8 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
                     struct xdp_buff *xdp)
 {
         struct page *page = virt_to_page(xdp->data);
-        struct skb_shared_info *sinfo = NULL;
-        struct mlx5e_xmit_data xdptxd;
+        struct mlx5e_xmit_data_frags xdptxdf = {};
+        struct mlx5e_xmit_data *xdptxd;
         struct mlx5e_xdp_info xdpi;
         struct xdp_frame *xdpf;
         dma_addr_t dma_addr;
@@ -72,8 +72,10 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
         if (unlikely(!xdpf))
                 return false;
 
-        xdptxd.data = xdpf->data;
-        xdptxd.len = xdpf->len;
+        xdptxd = &xdptxdf.xd;
+        xdptxd->data = xdpf->data;
+        xdptxd->len = xdpf->len;
+        xdptxd->has_frags = xdp_frame_has_frags(xdpf);
 
         if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL) {
                 /* The xdp_buff was in the UMEM and was copied into a newly
@@ -90,19 +92,22 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 
                 xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME;
 
-                dma_addr = dma_map_single(sq->pdev, xdptxd.data, xdptxd.len,
+                if (unlikely(xdptxd->has_frags))
+                        return false;
+
+                dma_addr = dma_map_single(sq->pdev, xdptxd->data, xdptxd->len,
                                           DMA_TO_DEVICE);
                 if (dma_mapping_error(sq->pdev, dma_addr)) {
                         xdp_return_frame(xdpf);
                         return false;
                 }
 
-                xdptxd.dma_addr = dma_addr;
+                xdptxd->dma_addr = dma_addr;
                 xdpi.frame.xdpf = xdpf;
                 xdpi.frame.dma_addr = dma_addr;
 
                 if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-                                              mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0)))
+                                              mlx5e_xmit_xdp_frame, sq, xdptxd, 0)))
                         return false;
 
                 mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
@@ -119,13 +124,13 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
         xdpi.page.rq = rq;
 
         dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
-        dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_BIDIRECTIONAL);
+        dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd->len, DMA_BIDIRECTIONAL);
 
-        if (unlikely(xdp_frame_has_frags(xdpf))) {
-                sinfo = xdp_get_shared_info_from_frame(xdpf);
+        if (unlikely(xdptxd->has_frags)) {
+                xdptxdf.sinfo = xdp_get_shared_info_from_frame(xdpf);
 
-                for (i = 0; i < sinfo->nr_frags; i++) {
-                        skb_frag_t *frag = &sinfo->frags[i];
+                for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) {
+                        skb_frag_t *frag = &xdptxdf.sinfo->frags[i];
                         dma_addr_t addr;
                         u32 len;
 
@@ -137,18 +142,18 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
                 }
         }
 
-        xdptxd.dma_addr = dma_addr;
+        xdptxd->dma_addr = dma_addr;
 
         if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-                                      mlx5e_xmit_xdp_frame, sq, &xdptxd, sinfo, 0)))
+                                      mlx5e_xmit_xdp_frame, sq, xdptxd, 0)))
                 return false;
 
         xdpi.page.page = page;
         mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 
-        if (unlikely(xdp_frame_has_frags(xdpf))) {
-                for (i = 0; i < sinfo->nr_frags; i++) {
-                        skb_frag_t *frag = &sinfo->frags[i];
+        if (unlikely(xdptxd->has_frags)) {
+                for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) {
+                        skb_frag_t *frag = &xdptxdf.sinfo->frags[i];
 
                         xdpi.page.page = skb_frag_page(frag);
                         mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
@@ -381,23 +386,23 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq
 
 INDIRECT_CALLABLE_SCOPE bool
 mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
-                     struct skb_shared_info *sinfo, int check_result);
+                     int check_result);
 
 INDIRECT_CALLABLE_SCOPE bool
 mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
-                           struct skb_shared_info *sinfo, int check_result)
+                           int check_result)
 {
         struct mlx5e_tx_mpwqe *session = &sq->mpwqe;
         struct mlx5e_xdpsq_stats *stats = sq->stats;
 
-        if (unlikely(sinfo)) {
+        if (unlikely(xdptxd->has_frags)) {
                 /* MPWQE is enabled, but a multi-buffer packet is queued for
                  * transmission. MPWQE can't send fragmented packets, so close
                  * the current session and fall back to a regular WQE.
                  */
                 if (unlikely(sq->mpwqe.wqe))
                         mlx5e_xdp_mpwqe_complete(sq);
-                return mlx5e_xmit_xdp_frame(sq, xdptxd, sinfo, 0);
+                return mlx5e_xmit_xdp_frame(sq, xdptxd, 0);
         }
 
         if (unlikely(xdptxd->len > sq->hw_mtu)) {
@@ -446,8 +451,10 @@ INDIRECT_CALLABLE_SCOPE int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq)
 
 INDIRECT_CALLABLE_SCOPE bool
 mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
-                     struct skb_shared_info *sinfo, int check_result)
+                     int check_result)
 {
+        struct mlx5e_xmit_data_frags *xdptxdf =
+                container_of(xdptxd, struct mlx5e_xmit_data_frags, xd);
         struct mlx5_wq_cyc *wq = &sq->wq;
         struct mlx5_wqe_ctrl_seg *cseg;
         struct mlx5_wqe_data_seg *dseg;
@@ -476,9 +483,9 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
         if (!check_result) {
                 int stop_room = 1;
 
-                if (unlikely(sinfo)) {
-                        ds_cnt += sinfo->nr_frags;
-                        num_frags = sinfo->nr_frags;
+                if (unlikely(xdptxd->has_frags)) {
+                        ds_cnt += xdptxdf->sinfo->nr_frags;
+                        num_frags = xdptxdf->sinfo->nr_frags;
                         num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
                         /* Assuming MLX5_CAP_GEN(mdev, max_wqe_sz_sq) is big
                          * enough to hold all fragments.
@@ -529,7 +536,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
                 dseg->lkey = sq->mkey_be;
 
                 for (i = 0; i < num_frags; i++) {
-                        skb_frag_t *frag = &sinfo->frags[i];
+                        skb_frag_t *frag = &xdptxdf->sinfo->frags[i];
                         dma_addr_t addr;
 
                         addr = page_pool_get_dma_addr(skb_frag_page(frag)) +
@@ -718,7 +725,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
         for (i = 0; i < n; i++) {
                 struct xdp_frame *xdpf = frames[i];
-                struct mlx5e_xmit_data xdptxd;
+                struct mlx5e_xmit_data xdptxd = {};
                 struct mlx5e_xdp_info xdpi;
                 bool ret;
 
@@ -735,7 +742,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
                 xdpi.frame.dma_addr = xdptxd.dma_addr;
 
                 ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-                                      mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL, 0);
+                                      mlx5e_xmit_xdp_frame, sq, &xdptxd, 0);
                 if (unlikely(!ret)) {
                         dma_unmap_single(sq->pdev, xdptxd.dma_addr,
                                          xdptxd.len, DMA_TO_DEVICE);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 8208692035f8..8e97c68d11f4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -101,11 +101,9 @@ extern const struct xdp_metadata_ops mlx5e_xdp_metadata_ops;
 
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
                                                           struct mlx5e_xmit_data *xdptxd,
-                                                          struct skb_shared_info *sinfo,
                                                           int check_result));
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq,
                                                     struct mlx5e_xmit_data *xdptxd,
-                                                    struct skb_shared_info *sinfo,
                                                     int check_result));
 INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq *sq));
 INDIRECT_CALLABLE_DECLARE(int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq));
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
index 367a9505ca4f..b370a4daddfd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
@@ -61,7 +61,6 @@ static void mlx5e_xsk_tx_post_err(struct mlx5e_xdpsq *sq,
 bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget)
 {
         struct xsk_buff_pool *pool = sq->xsk_pool;
-        struct mlx5e_xmit_data xdptxd;
         struct mlx5e_xdp_info xdpi;
         bool work_done = true;
         bool flush = false;
@@ -73,6 +72,7 @@ bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget)
                                                  mlx5e_xmit_xdp_frame_check_mpwqe,
                                                  mlx5e_xmit_xdp_frame_check, sq);
+                struct mlx5e_xmit_data xdptxd = {};
                 struct xdp_desc desc;
                 bool ret;
 
@@ -97,7 +97,7 @@ bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget)
                 xsk_buff_raw_dma_sync_for_device(pool, xdptxd.dma_addr, xdptxd.len);
 
                 ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe,
-                                      mlx5e_xmit_xdp_frame, sq, &xdptxd, NULL,
+                                      mlx5e_xmit_xdp_frame, sq, &xdptxd,
                                       check_result);
                 if (unlikely(!ret)) {
                         if (sq->mpwqe.wqe
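
Note (not part of the patch): a self-contained sketch of the bitfield plus
cast-up pattern introduced above, using simplified stand-in types (xmit_data,
xmit_data_frags) rather than the driver's real ones. The cast up via
container_of() is only valid when has_frags was set by a caller that really
allocated the extended descriptor, and zero-initialization keeps has_frags
clear otherwise.

#include <stddef.h>
#include <stdio.h>

struct xmit_data {
        unsigned int len : 31;
        unsigned int has_frags : 1;   /* one bit borrowed from len */
};

struct xmit_data_frags {
        struct xmit_data xd;          /* must stay the first member */
        void *sinfo;                  /* stands in for skb_shared_info * */
};

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static void xmit(struct xmit_data *xd)
{
        if (xd->has_frags) {
                /* Safe only because callers that set has_frags always pass
                 * the embedded xd of a struct xmit_data_frags.
                 */
                struct xmit_data_frags *xdf =
                        container_of(xd, struct xmit_data_frags, xd);
                printf("multi-buffer, sinfo=%p\n", xdf->sinfo);
        } else {
                printf("linear, len=%u\n", xd->len);
        }
}

int main(void)
{
        struct xmit_data_frags xdf = { .xd = { .len = 128, .has_frags = 1 } };
        struct xmit_data plain = { .len = 64 };  /* zero-init leaves has_frags 0 */

        xmit(&xdf.xd);
        xmit(&plain);
        return 0;
}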
From patchwork Mon Apr 17 12:18:52 2023
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13213768
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer,
    Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi,
    Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 04/15] net/mlx5e: XDP, Remove doubtful unlikely calls
Date: Mon, 17 Apr 2023 15:18:52 +0300
Message-ID: <20230417121903.46218-5-tariqt@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

It is neither likely nor unlikely that the xdp buff has fragments; it
depends on the loaded program and on the size of the received packet.

Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 3e7ebf0f0f01..dcae2d4e2c03 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -126,7 +126,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
         dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
         dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd->len, DMA_BIDIRECTIONAL);
 
-        if (unlikely(xdptxd->has_frags)) {
+        if (xdptxd->has_frags) {
                 xdptxdf.sinfo = xdp_get_shared_info_from_frame(xdpf);
 
                 for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) {
@@ -151,7 +151,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
         xdpi.page.page = page;
         mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 
-        if (unlikely(xdptxd->has_frags)) {
+        if (xdptxd->has_frags) {
                 for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) {
                         skb_frag_t *frag = &xdptxdf.sinfo->frags[i];
 
@@ -395,7 +395,7 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx
         struct mlx5e_tx_mpwqe *session = &sq->mpwqe;
         struct mlx5e_xdpsq_stats *stats = sq->stats;
 
-        if (unlikely(xdptxd->has_frags)) {
+        if (xdptxd->has_frags) {
                 /* MPWQE is enabled, but a multi-buffer packet is queued for
                  * transmission. MPWQE can't send fragmented packets, so close
                  * the current session and fall back to a regular WQE.
@@ -483,7 +483,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
         if (!check_result) {
                 int stop_room = 1;
 
-                if (unlikely(xdptxd->has_frags)) {
+                if (xdptxd->has_frags) {
                         ds_cnt += xdptxdf->sinfo->nr_frags;
                         num_frags = xdptxdf->sinfo->nr_frags;
                         num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
@@ -525,7 +525,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
         cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND);
 
-        if (unlikely(test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state))) {
+        if (test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) {
                 u8 num_pkts = 1 + num_frags;
                 int i;
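
Note (not part of the patch): likely()/unlikely() are branch-prediction hints;
in the kernel they are built on __builtin_expect(). The patch drops them where
the branch outcome is data-dependent rather than overwhelmingly one-sided. A
small illustrative sketch (example() is a made-up function):

/* Typical definitions on top of the GCC/Clang builtin; shown only to
 * illustrate what the annotation does.
 */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Annotate only branches that are overwhelmingly one-sided (e.g. error
 * paths), not branches whose probability depends on the traffic.
 */
static int example(const void *buf, int has_frags)
{
        if (unlikely(!buf))     /* allocation failure: genuinely rare */
                return -1;

        if (has_frags)          /* depends on program and packet size: no hint */
                return 2;

        return 1;
}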
From patchwork Mon Apr 17 12:18:53 2023
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 13213770
X-Patchwork-Delegate: kuba@kernel.org
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer,
    Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi,
    Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 05/15] net/mlx5e: XDP, Use multiple single-entry objects in xdpi_fifo
Date: Mon, 17 Apr 2023 15:18:53 +0300
Message-ID: <20230417121903.46218-6-tariqt@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

Here we fix the current abuse of wi->num_pkts, which was used to
indicate multiple xdpi entries in the xdpi_fifo.

Instead, reduce mlx5e_xdp_info to the size of a single field, making it
a union of unions. Per packet, use as many instances as needed to
provide the information needed at the time of completion.

The sequence of xdpi instances pushed is well defined, derived from the
xmit_mode.
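
Note (not part of the patch): a minimal, self-contained model of the scheme
described above, with invented names (xdp_info, MODE_FRAME/MODE_PAGE,
push/pop): variable-length per-packet metadata is stored as a well-defined
sequence of fixed-size union entries, and the consumer pops the same sequence
back at completion time.

#include <stdio.h>

/* Each fifo entry is one union; a packet pushes a mode entry followed by a
 * mode-defined number of extra entries.
 */
union xdp_info {
        int mode;                 /* first entry of every sequence */
        void *frame;              /* MODE_FRAME: entry 2 */
        unsigned long dma_addr;   /* MODE_FRAME: entry 3 */
        unsigned int num_pages;   /* MODE_PAGE: entry 2 */
        void *page;               /* MODE_PAGE: entries 3..2+num_pages */
};

enum { MODE_FRAME = 1, MODE_PAGE = 2 };

static union xdp_info fifo[64];
static unsigned int pc, cc;       /* producer / consumer counters */

static void push(union xdp_info xi) { fifo[pc++ & 63] = xi; }
static union xdp_info pop(void)     { return fifo[cc++ & 63]; }

int main(void)
{
        /* Producer: one PAGE-mode packet made of two pages. */
        push((union xdp_info){ .mode = MODE_PAGE });
        push((union xdp_info){ .num_pages = 2 });
        push((union xdp_info){ .page = (void *)0x1000 });
        push((union xdp_info){ .page = (void *)0x2000 });

        /* Consumer: the mode entry tells it how many more entries follow. */
        if (pop().mode == MODE_PAGE) {
                unsigned int n = pop().num_pages;

                while (n--)
                        printf("recycle page %p\n", pop().page);
        }
        return 0;
}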
Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 +- .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 95 +++++++++++++------ .../net/ethernet/mellanox/mlx5/core/en/xdp.h | 38 +++++--- .../ethernet/mellanox/mlx5/core/en/xsk/tx.c | 8 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 8 +- 5 files changed, 101 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 386f5a498e52..0e15afbe1673 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -476,7 +476,7 @@ struct mlx5e_txqsq { } ____cacheline_aligned_in_smp; struct mlx5e_xdp_info_fifo { - struct mlx5e_xdp_info *xi; + union mlx5e_xdp_info *xi; u32 *cc; u32 *pc; u32 mask; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index dcae2d4e2c03..5dab9012dc2a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -63,7 +63,6 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, struct page *page = virt_to_page(xdp->data); struct mlx5e_xmit_data_frags xdptxdf = {}; struct mlx5e_xmit_data *xdptxd; - struct mlx5e_xdp_info xdpi; struct xdp_frame *xdpf; dma_addr_t dma_addr; int i; @@ -90,8 +89,6 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, */ __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */ - xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME; - if (unlikely(xdptxd->has_frags)) return false; @@ -103,14 +100,18 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, } xdptxd->dma_addr = dma_addr; - xdpi.frame.xdpf = xdpf; - xdpi.frame.dma_addr = dma_addr; if (unlikely(!INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, mlx5e_xmit_xdp_frame, sq, xdptxd, 0))) return false; - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + /* xmit_mode == MLX5E_XDP_XMIT_MODE_FRAME */ + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .mode = MLX5E_XDP_XMIT_MODE_FRAME }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .frame.xdpf = xdpf }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .frame.dma_addr = dma_addr }); return true; } @@ -120,9 +121,6 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, * mode. 
*/ - xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE; - xdpi.page.rq = rq; - dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd->len, DMA_BIDIRECTIONAL); @@ -148,16 +146,28 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, mlx5e_xmit_xdp_frame, sq, xdptxd, 0))) return false; - xdpi.page.page = page; - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + /* xmit_mode == MLX5E_XDP_XMIT_MODE_PAGE */ + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .mode = MLX5E_XDP_XMIT_MODE_PAGE }); if (xdptxd->has_frags) { + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) + { .page.num = 1 + xdptxdf.sinfo->nr_frags }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .page.page = page }); for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) { skb_frag_t *frag = &xdptxdf.sinfo->frags[i]; - xdpi.page.page = skb_frag_page(frag); - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) + { .page.page = skb_frag_page(frag) }); } + } else { + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .page.num = 1 }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .page.page = page }); } return true; @@ -526,7 +536,6 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND); if (test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) { - u8 num_pkts = 1 + num_frags; int i; memset(&cseg->trailer, 0, sizeof(cseg->trailer)); @@ -552,7 +561,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, sq->db.wqe_info[pi] = (struct mlx5e_xdp_wqe_info) { .num_wqebbs = num_wqebbs, - .num_pkts = num_pkts, + .num_pkts = 1, }; sq->pc += num_wqebbs; @@ -577,20 +586,46 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq, u16 i; for (i = 0; i < wi->num_pkts; i++) { - struct mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + union mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); switch (xdpi.mode) { - case MLX5E_XDP_XMIT_MODE_FRAME: + case MLX5E_XDP_XMIT_MODE_FRAME: { /* XDP_TX from the XSK RQ and XDP_REDIRECT */ - dma_unmap_single(sq->pdev, xdpi.frame.dma_addr, - xdpi.frame.xdpf->len, DMA_TO_DEVICE); - xdp_return_frame_bulk(xdpi.frame.xdpf, bq); + struct xdp_frame *xdpf; + dma_addr_t dma_addr; + + xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + xdpf = xdpi.frame.xdpf; + xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + dma_addr = xdpi.frame.dma_addr; + + dma_unmap_single(sq->pdev, dma_addr, + xdpf->len, DMA_TO_DEVICE); + xdp_return_frame_bulk(xdpf, bq); break; - case MLX5E_XDP_XMIT_MODE_PAGE: + } + case MLX5E_XDP_XMIT_MODE_PAGE: { /* XDP_TX from the regular RQ */ - page_pool_put_defragged_page(xdpi.page.rq->page_pool, - xdpi.page.page, -1, true); + u8 num, n = 0; + + xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + num = xdpi.page.num; + + do { + struct page *page; + + xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + page = xdpi.page.page; + + /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) + * as we know this is a page_pool page. 
+ */ + page_pool_put_defragged_page(page->pp, + page, -1, true); + } while (++n < num); + break; + } case MLX5E_XDP_XMIT_MODE_XSK: /* AF_XDP send */ (*xsk_frames)++; @@ -726,7 +761,6 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, for (i = 0; i < n; i++) { struct xdp_frame *xdpf = frames[i]; struct mlx5e_xmit_data xdptxd = {}; - struct mlx5e_xdp_info xdpi; bool ret; xdptxd.data = xdpf->data; @@ -737,10 +771,6 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, if (unlikely(dma_mapping_error(sq->pdev, xdptxd.dma_addr))) break; - xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME; - xdpi.frame.xdpf = xdpf; - xdpi.frame.dma_addr = xdptxd.dma_addr; - ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, mlx5e_xmit_xdp_frame, sq, &xdptxd, 0); if (unlikely(!ret)) { @@ -748,7 +778,14 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, xdptxd.len, DMA_TO_DEVICE); break; } - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + + /* xmit_mode == MLX5E_XDP_XMIT_MODE_FRAME */ + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .mode = MLX5E_XDP_XMIT_MODE_FRAME }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .frame.xdpf = xdpf }); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) { .frame.dma_addr = xdptxd.dma_addr }); nxmit++; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index 8e97c68d11f4..9e8e6184f9e4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -71,18 +71,30 @@ enum mlx5e_xdp_xmit_mode { MLX5E_XDP_XMIT_MODE_XSK, }; -struct mlx5e_xdp_info { +/* xmit_mode entry is pushed to the fifo per packet, followed by multiple + * entries, as follows: + * + * MLX5E_XDP_XMIT_MODE_FRAME: + * xdpf, dma_addr_1, dma_addr_2, ... , dma_addr_num. + * 'num' is derived from xdpf. + * + * MLX5E_XDP_XMIT_MODE_PAGE: + * num, page_1, page_2, ... , page_num. + * + * MLX5E_XDP_XMIT_MODE_XSK: + * none. + */ +union mlx5e_xdp_info { enum mlx5e_xdp_xmit_mode mode; union { - struct { - struct xdp_frame *xdpf; - dma_addr_t dma_addr; - } frame; - struct { - struct mlx5e_rq *rq; - struct page *page; - } page; - }; + struct xdp_frame *xdpf; + dma_addr_t dma_addr; + } frame; + union { + struct mlx5e_rq *rq; + u8 num; + struct page *page; + } page; }; struct mlx5e_xsk_param; @@ -212,14 +224,14 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, static inline void mlx5e_xdpi_fifo_push(struct mlx5e_xdp_info_fifo *fifo, - struct mlx5e_xdp_info *xi) + union mlx5e_xdp_info xi) { u32 i = (*fifo->pc)++ & fifo->mask; - fifo->xi[i] = *xi; + fifo->xi[i] = xi; } -static inline struct mlx5e_xdp_info +static inline union mlx5e_xdp_info mlx5e_xdpi_fifo_pop(struct mlx5e_xdp_info_fifo *fifo) { return fifo->xi[(*fifo->cc)++ & fifo->mask]; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c index b370a4daddfd..597f319d4770 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c @@ -44,7 +44,7 @@ int mlx5e_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags) * same. 
*/ static void mlx5e_xsk_tx_post_err(struct mlx5e_xdpsq *sq, - struct mlx5e_xdp_info *xdpi) + union mlx5e_xdp_info *xdpi) { u16 pi = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->pc); struct mlx5e_xdp_wqe_info *wi = &sq->db.wqe_info[pi]; @@ -54,14 +54,14 @@ static void mlx5e_xsk_tx_post_err(struct mlx5e_xdpsq *sq, wi->num_pkts = 1; nopwqe = mlx5e_post_nop(&sq->wq, sq->sqn, &sq->pc); - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, *xdpi); sq->doorbell_cseg = &nopwqe->ctrl; } bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget) { struct xsk_buff_pool *pool = sq->xsk_pool; - struct mlx5e_xdp_info xdpi; + union mlx5e_xdp_info xdpi; bool work_done = true; bool flush = false; @@ -105,7 +105,7 @@ bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget) mlx5e_xsk_tx_post_err(sq, &xdpi); } else { - mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi); } flush = true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index ec72743b64e2..0b5aafaefe4c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1300,17 +1300,19 @@ static int mlx5e_alloc_xdpsq_fifo(struct mlx5e_xdpsq *sq, int numa) { struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo; int wq_sz = mlx5_wq_cyc_get_size(&sq->wq); - int dsegs_per_wq = wq_sz * MLX5_SEND_WQEBB_NUM_DS; + int entries = wq_sz * MLX5_SEND_WQEBB_NUM_DS * 2; /* upper bound for maximum num of + * entries of all xmit_modes. */ size_t size; - size = array_size(sizeof(*xdpi_fifo->xi), dsegs_per_wq); + size = array_size(sizeof(*xdpi_fifo->xi), entries); xdpi_fifo->xi = kvzalloc_node(size, GFP_KERNEL, numa); if (!xdpi_fifo->xi) return -ENOMEM; xdpi_fifo->pc = &sq->xdpi_fifo_pc; xdpi_fifo->cc = &sq->xdpi_fifo_cc; - xdpi_fifo->mask = dsegs_per_wq - 1; + xdpi_fifo->mask = entries - 1; return 0; }
From patchwork Mon Apr 17 12:18:54 2023
X-Patchwork-Id: 13213771
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 06/15] net/mlx5e: XDP, Add support for multi-buffer XDP redirect-in
Date: Mon, 17 Apr 2023 15:18:54 +0300
Message-ID: <20230417121903.46218-7-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Handle multi-buffer XDP redirect-in requests coming through mlx5e_xdp_xmit.

Extend struct mlx5e_xmit_data_frags with an additional dma_arr field that points to the fragments' DMA mappings, as these cannot be retrieved via the page_pool_get_dma_addr() function. Push one dma_addr xdpi instance per fragment, and use them in the completion flow to dma_unmap the frags.

Finally, remove the restriction in mlx5e_open_xdpsq, and set the NETDEV_XDP_ACT_NDO_XMIT_SG flag in xdp_features.
Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en/txrx.h | 1 + .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 82 ++++++++++++++++--- .../net/ethernet/mellanox/mlx5/core/en_main.c | 9 +- 3 files changed, 75 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 1302f52db883..47381e949f1f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -87,6 +87,7 @@ struct mlx5e_xmit_data { struct mlx5e_xmit_data_frags { struct mlx5e_xmit_data xd; struct skb_shared_info *sinfo; + dma_addr_t *dma_arr; }; netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 5dab9012dc2a..c266d073e2f2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -126,6 +126,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, if (xdptxd->has_frags) { xdptxdf.sinfo = xdp_get_shared_info_from_frame(xdpf); + xdptxdf.dma_arr = NULL; for (i = 0; i < xdptxdf.sinfo->nr_frags; i++) { skb_frag_t *frag = &xdptxdf.sinfo->frags[i]; @@ -548,7 +549,8 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, skb_frag_t *frag = &xdptxdf->sinfo->frags[i]; dma_addr_t addr; - addr = page_pool_get_dma_addr(skb_frag_page(frag)) + + addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[i] : + page_pool_get_dma_addr(skb_frag_page(frag)) + skb_frag_off(frag); dseg++; @@ -601,6 +603,21 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq, dma_unmap_single(sq->pdev, dma_addr, xdpf->len, DMA_TO_DEVICE); + if (xdp_frame_has_frags(xdpf)) { + struct skb_shared_info *sinfo; + int j; + + sinfo = xdp_get_shared_info_from_frame(xdpf); + for (j = 0; j < sinfo->nr_frags; j++) { + skb_frag_t *frag = &sinfo->frags[j]; + + xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo); + dma_addr = xdpi.frame.dma_addr; + + dma_unmap_single(sq->pdev, dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + } + } xdp_return_frame_bulk(xdpf, bq); break; } @@ -759,23 +776,57 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, sq = &priv->channels.c[sq_num]->xdpsq; for (i = 0; i < n; i++) { + struct mlx5e_xmit_data_frags xdptxdf = {}; struct xdp_frame *xdpf = frames[i]; - struct mlx5e_xmit_data xdptxd = {}; + dma_addr_t dma_arr[MAX_SKB_FRAGS]; + struct mlx5e_xmit_data *xdptxd; bool ret; - xdptxd.data = xdpf->data; - xdptxd.len = xdpf->len; - xdptxd.dma_addr = dma_map_single(sq->pdev, xdptxd.data, - xdptxd.len, DMA_TO_DEVICE); + xdptxd = &xdptxdf.xd; + xdptxd->data = xdpf->data; + xdptxd->len = xdpf->len; + xdptxd->has_frags = xdp_frame_has_frags(xdpf); + xdptxd->dma_addr = dma_map_single(sq->pdev, xdptxd->data, + xdptxd->len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(sq->pdev, xdptxd.dma_addr))) + if (unlikely(dma_mapping_error(sq->pdev, xdptxd->dma_addr))) break; + if (xdptxd->has_frags) { + int j; + + xdptxdf.sinfo = xdp_get_shared_info_from_frame(xdpf); + xdptxdf.dma_arr = dma_arr; + for (j = 0; j < xdptxdf.sinfo->nr_frags; j++) { + skb_frag_t *frag = &xdptxdf.sinfo->frags[j]; + + dma_arr[j] = dma_map_single(sq->pdev, skb_frag_address(frag), + skb_frag_size(frag), DMA_TO_DEVICE); + + if (!dma_mapping_error(sq->pdev, dma_arr[j])) + continue; + /* mapping error */ + while (--j >= 0) + dma_unmap_single(sq->pdev, dma_arr[j], + 
skb_frag_size(&xdptxdf.sinfo->frags[j]), + DMA_TO_DEVICE); + goto out; + } + } + ret = INDIRECT_CALL_2(sq->xmit_xdp_frame, mlx5e_xmit_xdp_frame_mpwqe, - mlx5e_xmit_xdp_frame, sq, &xdptxd, 0); + mlx5e_xmit_xdp_frame, sq, xdptxd, 0); if (unlikely(!ret)) { - dma_unmap_single(sq->pdev, xdptxd.dma_addr, - xdptxd.len, DMA_TO_DEVICE); + int j; + + dma_unmap_single(sq->pdev, xdptxd->dma_addr, + xdptxd->len, DMA_TO_DEVICE); + if (!xdptxd->has_frags) + break; + for (j = 0; j < xdptxdf.sinfo->nr_frags; j++) + dma_unmap_single(sq->pdev, dma_arr[j], + skb_frag_size(&xdptxdf.sinfo->frags[j]), + DMA_TO_DEVICE); break; } @@ -785,10 +836,19 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, (union mlx5e_xdp_info) { .frame.xdpf = xdpf }); mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, - (union mlx5e_xdp_info) { .frame.dma_addr = xdptxd.dma_addr }); + (union mlx5e_xdp_info) { .frame.dma_addr = xdptxd->dma_addr }); + if (xdptxd->has_frags) { + int j; + + for (j = 0; j < xdptxdf.sinfo->nr_frags; j++) + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, + (union mlx5e_xdp_info) + { .frame.dma_addr = dma_arr[j] }); + } nxmit++; } +out: if (flags & XDP_XMIT_FLUSH) { if (sq->mpwqe.wqe) mlx5e_xdp_mpwqe_complete(sq); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 0b5aafaefe4c..ccf7bb136f50 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1862,11 +1862,7 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params, csp.min_inline_mode = sq->min_inline_mode; set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); - /* Don't enable multi buffer on XDP_REDIRECT SQ, as it's not yet - * supported by upstream, and there is no defined trigger to allow - * transmitting redirected multi-buffer frames. 
- */ - if (param->is_xdp_mb && !is_redirect) + if (param->is_xdp_mb) set_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state); err = mlx5e_create_sq_rdy(c->mdev, param, &csp, 0, &sq->sqn); @@ -4068,7 +4064,8 @@ void mlx5e_set_xdp_feature(struct net_device *netdev) val = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | NETDEV_XDP_ACT_XSK_ZEROCOPY | - NETDEV_XDP_ACT_NDO_XMIT; + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG; if (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC) val |= NETDEV_XDP_ACT_RX_SG; xdp_set_features_flag(netdev, val);
From patchwork Mon Apr 17 12:18:55 2023
X-Patchwork-Id: 13213772
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 07/15] net/mlx5e: XDP, Improve Striding RQ check with XDP
Date: Mon, 17 Apr 2023 15:18:55 +0300
Message-ID: <20230417121903.46218-8-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

The non-linear memory scheme of Striding RQ does not yet support XDP. Move the check to where it belongs: inside the params validation function mlx5e_params_validate_xdp().
Reviewed-by: Gal Pressman Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en_main.c | 23 ++++++++----------- 1 file changed, 9 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index ccf7bb136f50..faae443770bb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4261,8 +4261,16 @@ static bool mlx5e_params_validate_xdp(struct net_device *netdev, /* No XSK params: AF_XDP can't be enabled yet at the point of setting * the XDP program. */ - is_linear = mlx5e_rx_is_linear_skb(mdev, params, NULL); + is_linear = params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC ? + mlx5e_rx_is_linear_skb(mdev, params, NULL) : + mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL); + /* XDP affects striding RQ parameters. Block XDP if striding RQ won't be + * supported with the new parameters: if PAGE_SIZE is bigger than + * MLX5_MPWQE_LOG_STRIDE_SZ_MAX, striding RQ can't be used, even though + * the MTU is small enough for the linear mode, because XDP uses strides + * of PAGE_SIZE on regular RQs. + */ if (!is_linear && params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) { netdev_warn(netdev, "XDP is not allowed with striding RQ and MTU(%d) > %d\n", params->sw_mtu, @@ -4817,19 +4825,6 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) new_params = priv->channels.params; new_params.xdp_prog = prog; - /* XDP affects striding RQ parameters. Block XDP if striding RQ won't be - * supported with the new parameters: if PAGE_SIZE is bigger than - * MLX5_MPWQE_LOG_STRIDE_SZ_MAX, striding RQ can't be used, even though - * the MTU is small enough for the linear mode, because XDP uses strides - * of PAGE_SIZE on regular RQs. - */ - if (reset && MLX5E_GET_PFLAG(&new_params, MLX5E_PFLAG_RX_STRIDING_RQ)) { - /* Checking for regular RQs here; XSK RQs were checked on XSK bind. 
*/ - err = mlx5e_mpwrq_validate_regular(priv->mdev, &new_params); - if (err) - goto unlock; - } - old_prog = priv->channels.params.xdp_prog; err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, reset);
From patchwork Mon Apr 17 12:18:56 2023
X-Patchwork-Id: 13213773
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 08/15] net/mlx5e: XDP, Let XDP checker function get the params as input
Date: Mon, 17 Apr 2023 15:18:56 +0300
Message-ID: <20230417121903.46218-9-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Change mlx5e_xdp_allowed() so that it gets the params structure with the xdp_prog already applied, rather than creating a local copy based on the current params in priv. This reduces stack memory usage and acts on the exact params instance that is about to be applied.

Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en_main.c | 21 +++++++------------ 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index faae443770bb..6a278901b40b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4773,20 +4773,15 @@ static void mlx5e_tx_timeout(struct net_device *dev, unsigned int txqueue) queue_work(priv->wq, &priv->tx_timeout_work); } -static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog) +static int mlx5e_xdp_allowed(struct net_device *netdev, struct mlx5_core_dev *mdev, + struct mlx5e_params *params) { - struct net_device *netdev = priv->netdev; - struct mlx5e_params new_params; - - if (priv->channels.params.packet_merge.type != MLX5E_PACKET_MERGE_NONE) { + if (params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) { netdev_warn(netdev, "can't set XDP while HW-GRO/LRO is on, disable them first\n"); return -EINVAL; } - new_params = priv->channels.params; - new_params.xdp_prog = prog; - - if (!mlx5e_params_validate_xdp(netdev, priv->mdev, &new_params)) + if (!mlx5e_params_validate_xdp(netdev, mdev, params)) return -EINVAL; return 0; @@ -4813,8 +4808,11 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) mutex_lock(&priv->state_lock); + new_params = priv->channels.params; + new_params.xdp_prog = prog; + if (prog) { - err = mlx5e_xdp_allowed(priv, prog); + err = mlx5e_xdp_allowed(netdev, priv->mdev, &new_params); if (err) goto unlock; } @@ -4822,9 +4820,6 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) /* no need for full reset when exchanging programs */ reset = (!priv->channels.params.xdp_prog || !prog); - new_params = priv->channels.params; -
new_params.xdp_prog = prog; - old_prog = priv->channels.params.xdp_prog; err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, reset);
From patchwork Mon Apr 17 12:18:57 2023
X-Patchwork-Id: 13213776
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 09/15] net/mlx5e: XDP, Consider large multi-buffer packets in Striding RQ params calculations
Date: Mon, 17 Apr 2023 15:18:57 +0300
Message-ID: <20230417121903.46218-10-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Function mlx5e_rx_get_linear_stride_sz() returns PAGE_SIZE immediately when an XDP program is attached. The more accurate formula is ALIGN(sz, PAGE_SIZE), which prevents two packets from residing on the same page.

The assumption behind the current code is that sz <= PAGE_SIZE holds whenever an XDP program is set. This holds because the function is called:
- 3 times from Striding RQ flows, where XDP is not supported for such large packets;
- 1 time from the Legacy RQ flow, under the condition mlx5e_rx_is_linear_skb().

No functional change here; this only removes the implied assumption, in preparation for supporting XDP multi-buffer in Striding RQ.

Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c index 31f3c6e51d9e..196862e67af3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c @@ -253,17 +253,20 @@ static u32 mlx5e_rx_get_linear_stride_sz(struct mlx5_core_dev *mdev, struct mlx5e_xsk_param *xsk, bool mpwqe) { + u32 sz; + /* XSK frames are mapped as individual pages, because frames may come in * an arbitrary order from random locations in the UMEM. */ if (xsk) return mpwqe ? 1 << mlx5e_mpwrq_page_shift(mdev, xsk) : PAGE_SIZE; - /* XDP in mlx5e doesn't support multiple packets per page. */ - if (params->xdp_prog) - return PAGE_SIZE; + sz = roundup_pow_of_two(mlx5e_rx_get_linear_sz_skb(params, false)); - return roundup_pow_of_two(mlx5e_rx_get_linear_sz_skb(params, false)); + /* XDP in mlx5e doesn't support multiple packets per page. + * Do not assume sz <= PAGE_SIZE if params->xdp_prog is set. + */ + return params->xdp_prog && sz < PAGE_SIZE ?
PAGE_SIZE : sz; } static u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5_core_dev *mdev,
From patchwork Mon Apr 17 12:18:58 2023
X-Patchwork-Id: 13213775
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 10/15] net/mlx5e: XDP, Remove un-established assumptions on XDP buffer
Date: Mon, 17 Apr 2023 15:18:58 +0300
Message-ID: <20230417121903.46218-11-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
CIP:216.228.117.161;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge2.nvidia.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(346002)(39860400002)(396003)(451199021)(36840700001)(46966006)(40470700004)(7696005)(6666004)(86362001)(478600001)(110136005)(34020700004)(36860700001)(36756003)(2616005)(47076005)(426003)(336012)(83380400001)(107886003)(26005)(40480700001)(186003)(1076003)(40460700003)(82740400003)(82310400005)(7636003)(356005)(316002)(4326008)(70206006)(70586007)(2906002)(8676002)(8936002)(5660300002)(7416002)(41300700001)(54906003)(309714004);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:20:36.2748 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 36b4dfb0-abf3-47de-661d-08db3f3e262e X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.161];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DM6NAM11FT058.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR12MB6076 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Remove the assumption of non-zero linear length in the XDP xmit function, used to serve both internal XDP_TX operations as well as redirected-in requests. Do not apply the MLX5E_XDP_MIN_INLINE check unless necessary. Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 36 +++++++++++-------- .../net/ethernet/mellanox/mlx5/core/en_main.c | 4 --- 2 files changed, 22 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index c266d073e2f2..d89f934570ee 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -477,18 +477,26 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, u16 ds_cnt, inline_hdr_sz; u8 num_wqebbs = 1; int num_frags = 0; + bool inline_ok; + bool linear; u16 pi; struct mlx5e_xdpsq_stats *stats = sq->stats; - if (unlikely(dma_len < MLX5E_XDP_MIN_INLINE || sq->hw_mtu < dma_len)) { + inline_ok = sq->min_inline_mode == MLX5_INLINE_MODE_NONE || + dma_len >= MLX5E_XDP_MIN_INLINE; + + if (unlikely(!inline_ok || sq->hw_mtu < dma_len)) { stats->err++; return false; } - ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1; + inline_hdr_sz = 0; if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) - ds_cnt++; + inline_hdr_sz = MLX5E_XDP_MIN_INLINE; + + linear = !!(dma_len - inline_hdr_sz); + ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + linear + !!inline_hdr_sz; /* check_result must be 0 if sinfo is passed. 
*/ if (!check_result) { @@ -517,22 +525,23 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, eseg = &wqe->eth; dseg = wqe->data; - inline_hdr_sz = 0; - /* copy the inline part if required */ - if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) { + if (inline_hdr_sz) { memcpy(eseg->inline_hdr.start, xdptxd->data, sizeof(eseg->inline_hdr.start)); memcpy(dseg, xdptxd->data + sizeof(eseg->inline_hdr.start), - MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start)); - dma_len -= MLX5E_XDP_MIN_INLINE; - dma_addr += MLX5E_XDP_MIN_INLINE; - inline_hdr_sz = MLX5E_XDP_MIN_INLINE; + inline_hdr_sz - sizeof(eseg->inline_hdr.start)); + dma_len -= inline_hdr_sz; + dma_addr += inline_hdr_sz; dseg++; } /* write the dma part */ - dseg->addr = cpu_to_be64(dma_addr); - dseg->byte_count = cpu_to_be32(dma_len); + if (linear) { + dseg->addr = cpu_to_be64(dma_addr); + dseg->byte_count = cpu_to_be32(dma_len); + dseg->lkey = sq->mkey_be; + dseg++; + } cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND); @@ -543,7 +552,6 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, memset(eseg, 0, sizeof(*eseg) - sizeof(eseg->trailer)); eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz); - dseg->lkey = sq->mkey_be; for (i = 0; i < num_frags; i++) { skb_frag_t *frag = &xdptxdf->sinfo->frags[i]; @@ -553,10 +561,10 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, page_pool_get_dma_addr(skb_frag_page(frag)) + skb_frag_off(frag); - dseg++; dseg->addr = cpu_to_be64(addr); dseg->byte_count = cpu_to_be32(skb_frag_size(frag)); dseg->lkey = sq->mkey_be; + dseg++; } cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 6a278901b40b..a95ce206391b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1886,7 +1886,6 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params, struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(&sq->wq, i); struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl; struct mlx5_wqe_eth_seg *eseg = &wqe->eth; - struct mlx5_wqe_data_seg *dseg; sq->db.wqe_info[i] = (struct mlx5e_xdp_wqe_info) { .num_wqebbs = 1, @@ -1895,9 +1894,6 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params, cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt); eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz); - - dseg = (struct mlx5_wqe_data_seg *)cseg + (ds_cnt - 1); - dseg->lkey = sq->mkey_be; } } From patchwork Mon Apr 17 12:18:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13213774 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42AE9C77B72 for ; Mon, 17 Apr 2023 12:21:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231128AbjDQMVg (ORCPT ); Mon, 17 Apr 2023 08:21:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230494AbjDQMVX (ORCPT ); Mon, 17 Apr 2023 08:21:23 -0400 Received: from NAM04-DM6-obe.outbound.protection.outlook.com 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 11/15] net/mlx5e: XDP, Allow non-linear single-segment frames in XDP TX MPWQE
Date: Mon, 17 Apr 2023 15:18:59 +0300
Message-ID: <20230417121903.46218-12-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate:
kuba@kernel.org Under a few restrictions, TX MPWQE feature can serve multiple TX packets in a single TX descriptor. It requires each of the packets to have a single scatter entry / segment. Today we allow only linear frames to use this feature, although there's no real problem with non-linear ones where the whole packet reside in the first fragment. Expand the XDP TX MPWQE feature support to include such frames. This is in preparation for the downstream patch, in which we will generate such non-linear frames. Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 35 ++++++++++++++----- 1 file changed, 26 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index d89f934570ee..f0e6095809fa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -405,18 +405,35 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx { struct mlx5e_tx_mpwqe *session = &sq->mpwqe; struct mlx5e_xdpsq_stats *stats = sq->stats; + struct mlx5e_xmit_data *p = xdptxd; + struct mlx5e_xmit_data tmp; if (xdptxd->has_frags) { - /* MPWQE is enabled, but a multi-buffer packet is queued for - * transmission. MPWQE can't send fragmented packets, so close - * the current session and fall back to a regular WQE. - */ - if (unlikely(sq->mpwqe.wqe)) - mlx5e_xdp_mpwqe_complete(sq); - return mlx5e_xmit_xdp_frame(sq, xdptxd, 0); + struct mlx5e_xmit_data_frags *xdptxdf = + container_of(xdptxd, struct mlx5e_xmit_data_frags, xd); + + if (!!xdptxd->len + xdptxdf->sinfo->nr_frags > 1) { + /* MPWQE is enabled, but a multi-buffer packet is queued for + * transmission. MPWQE can't send fragmented packets, so close + * the current session and fall back to a regular WQE. + */ + if (unlikely(sq->mpwqe.wqe)) + mlx5e_xdp_mpwqe_complete(sq); + return mlx5e_xmit_xdp_frame(sq, xdptxd, 0); + } + if (!xdptxd->len) { + skb_frag_t *frag = &xdptxdf->sinfo->frags[0]; + + tmp.data = skb_frag_address(frag); + tmp.len = skb_frag_size(frag); + tmp.dma_addr = xdptxdf->dma_arr ? 
xdptxdf->dma_arr[0] : + page_pool_get_dma_addr(skb_frag_page(frag)) + + skb_frag_off(frag); + p = &tmp; + } } - if (unlikely(xdptxd->len > sq->hw_mtu)) { + if (unlikely(p->len > sq->hw_mtu)) { stats->err++; return false; } @@ -434,7 +451,7 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx mlx5e_xdp_mpwqe_session_start(sq); } - mlx5e_xdp_mpwqe_add_dseg(sq, xdptxd, stats); + mlx5e_xdp_mpwqe_add_dseg(sq, p, stats); if (unlikely(mlx5e_xdp_mpwqe_is_full(session, sq->max_sq_mpw_wqebbs))) mlx5e_xdp_mpwqe_complete(sq); From patchwork Mon Apr 17 12:19:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13213778 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99C53C77B70 for ; Mon, 17 Apr 2023 12:21:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231167AbjDQMVz (ORCPT ); Mon, 17 Apr 2023 08:21:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40908 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231146AbjDQMV0 (ORCPT ); Mon, 17 Apr 2023 08:21:26 -0400 Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2042.outbound.protection.outlook.com [40.107.93.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7ACD793EB for ; Mon, 17 Apr 2023 05:20:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=Q8KuqWJjYKeC584zTfbIvbTDRZ042UxflZXkDLucJOb/re2zgUhYizcQ63FLZ5VlIJw64mkn+zAeI4DYPu59KGBf7XgEoB+FCpk5Jpj7VP0C24EbU4E8fMqfijOqLAe8xg0e1kkqowhP5YpiXnp5Whv4n25dMwLF6ngzbQigRBEZKZ7ku82XZPMt0dojY43bDN1nAmPosJKxPbCtwNahglIHDxl2HUTW9CR7EtKJE876IttcfxiYzoZrAT7s2YBtMi5uhsZqztqVQ3sjzjrEeCdDnu26Ovr0Wl5Jni/fZNCQ6dQK6Cnpq95qiZjjStOinpahq9riTn89HhMFesfG0A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=1LrxrrSUppEW5BR6DvZ5UaYfEY/F6nRauGTQYep4c9w=; b=MhWC5j6B0tVNNJBbStJfaBn/rHAV0puk2sE8XX5G3f1O0YC8Q0LtPq5JiNvjwrGmaaY15wBxWxGu9iKnrR2b3H4iYYLbWbbbla/Np5QmGSGvlknr6i+2qJwa97YcNaqFxH+FsKXXSuRPim3AsOhHp+aem6pUKuXAUfKSsPWV/3RmX81Rnnl6ZuqFyoaQwS1g3xXtJMD77cBrVO+y2zExlpZ32v7n0LsG1n7CXjkCFApQDUZCNIJMK//XYYJIDUztR+k3qUMgW4+AUfIIq/8tna70J+AFbKZJOuKaJc3OszpjvlGSe6Bm72TMCHdyazUPlCqn4rvdXa+jqAAZPZSN7g== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=1LrxrrSUppEW5BR6DvZ5UaYfEY/F6nRauGTQYep4c9w=; 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 12/15] net/mlx5e: RX, Take shared info fragment addition into a function
Date: Mon, 17 Apr 2023 15:19:00 +0300
Message-ID: <20230417121903.46218-13-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

Introduce mlx5e_add_skb_shared_info_frag(), a helper dedicated to adding a fragment to a struct skb_shared_info object. Use it in the Legacy RQ flow; a downstream patch will add similar usage in the corresponding Striding RQ flow.
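As an illustration only (a plain-C model with stand-in types, not part of the diff below), the bookkeeping the new helper centralizes is a lazy-init-then-append pattern: initialize the shared info on the first fragment, then append the fragment and account its length. The real helper operates on struct skb_shared_info and struct xdp_buff exactly as in the hunk that follows.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in types; names and sizes here are illustrative only. */
struct model_frag { void *page; uint32_t offset; uint32_t len; };

struct model_shinfo {
	uint32_t nr_frags;
	uint32_t xdp_frags_size;
	struct model_frag frags[17];
};

struct model_xdp { bool has_frags; bool frag_pfmemalloc; };

static void model_add_frag(struct model_shinfo *sinfo, struct model_xdp *xdp,
			   void *page, uint32_t offset, uint32_t len,
			   bool pfmemalloc)
{
	if (!xdp->has_frags) {
		/* Init only on the first fragment, so the no-frags fast path
		 * never touches these fields.
		 */
		sinfo->nr_frags = 0;
		sinfo->xdp_frags_size = 0;
		xdp->has_frags = true;
	}

	sinfo->frags[sinfo->nr_frags++] =
		(struct model_frag){ .page = page, .offset = offset, .len = len };

	if (pfmemalloc)
		xdp->frag_pfmemalloc = true;
	sinfo->xdp_frags_size += len;
}

int main(void)
{
	struct model_shinfo sinfo;
	struct model_xdp xdp = { false, false };
	static char page[4096];

	model_add_frag(&sinfo, &xdp, page, 0, 1500, false);
	model_add_frag(&sinfo, &xdp, page, 2048, 512, false);
	assert(sinfo.nr_frags == 2 && sinfo.xdp_frags_size == 2012);
	return 0;
}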
Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en_rx.c | 56 ++++++++++--------- 1 file changed, 31 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 1049805571c6..1118327f6467 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -471,6 +471,35 @@ static int mlx5e_refill_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk) return i; } +static void +mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinfo, + struct xdp_buff *xdp, struct mlx5e_frag_page *frag_page, + u32 frag_offset, u32 len) +{ + skb_frag_t *frag; + + dma_addr_t addr = page_pool_get_dma_addr(frag_page->page); + + dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); + if (!xdp_buff_has_frags(xdp)) { + /* Init on the first fragment to avoid cold cache access + * when possible. + */ + sinfo->nr_frags = 0; + sinfo->xdp_frags_size = 0; + xdp_buff_set_frags_flag(xdp); + } + + frag = &sinfo->frags[sinfo->nr_frags++]; + __skb_frag_set_page(frag, frag_page->page); + skb_frag_off_set(frag, frag_offset); + skb_frag_size_set(frag, len); + + if (page_is_pfmemalloc(frag_page->page)) + xdp_buff_set_frag_pfmemalloc(xdp); + sinfo->xdp_frags_size += len; +} + static inline void mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb, struct page *page, u32 frag_offset, u32 len, @@ -1694,35 +1723,12 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi wi++; while (cqe_bcnt) { - skb_frag_t *frag; - frag_page = wi->frag_page; frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt); - addr = page_pool_get_dma_addr(frag_page->page); - dma_sync_single_for_cpu(rq->pdev, addr + wi->offset, - frag_consumed_bytes, rq->buff.map_dir); - - if (!xdp_buff_has_frags(&mxbuf.xdp)) { - /* Init on the first fragment to avoid cold cache access - * when possible. 
- */ - sinfo->nr_frags = 0; - sinfo->xdp_frags_size = 0; - xdp_buff_set_frags_flag(&mxbuf.xdp); - } - - frag = &sinfo->frags[sinfo->nr_frags++]; - - __skb_frag_set_page(frag, frag_page->page); - skb_frag_off_set(frag, wi->offset); - skb_frag_size_set(frag, frag_consumed_bytes); - - if (page_is_pfmemalloc(frag_page->page)) - xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp); - - sinfo->xdp_frags_size += frag_consumed_bytes; + mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page, + wi->offset, frag_consumed_bytes); truesize += frag_info->frag_stride; cqe_bcnt -= frag_consumed_bytes; From patchwork Mon Apr 17 12:19:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13213777 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A442C77B72 for ; Mon, 17 Apr 2023 12:21:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230487AbjDQMVx (ORCPT ); Mon, 17 Apr 2023 08:21:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231173AbjDQMV0 (ORCPT ); Mon, 17 Apr 2023 08:21:26 -0400 Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2077.outbound.protection.outlook.com [40.107.94.77]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 888068A5B for ; Mon, 17 Apr 2023 05:20:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=IjVbRo1adMGu3A1Vm54PxuRc4lbL+5apTwBHxS625ffoE/7506s4o91o6M7IzTiUSyV1ZtaRldLv3qHlyRpk0H3cEBIDJ4Oclxc4DvFJX8MAgvHlCmL98sWFD+dtJijBkSsBSVkrNyNhnKN6DirYBOZERHbUSqnVpihz+N/ixRvRAdX2CPdnwk8iGGO7fj+W3U6jSSm7Llun7LKUm2RlkmppqcO5Ja5ubnMD5LV6EOMJqQ1sl5VwrpGiGW8od7jX3MK0losHDpAvFPxeveAY0dm2YChKuEiXTzfiJUrmDm8Nh8joIsGGs+NrlReeNLc021vGEDWJpRolpk7/VgqEBg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=0yNMVqE1TpCawff5sY7F/k+w7uuZ6nDXklzkzSWhdj4=; b=HiFwnM7kqV8IiKfFR/G/Xc+1weqgsjPro/QK/9fn4JZrL6sj+wNB7oFOxEmv+8Z7RFPXQryccA3X5vIhrMlmmw7Ea0pFLtnpqW7NTG7pw2CxP2XRCiUWtmX6BwAOTttDoYtPETJzps0lrZ+NkhsgMtoO+8ulCppIcvvl4rPN7B0SdZY17q2PSVz+5a3O0RzINYYNSVmyHg4M3pQN2rVa/4WEnQzPEeeW9I12LQo0SGzGM/lbqLup+FzLWS/Bj5DB/+0n0IeSMRzy92CG2QuEPs3076nvejy5MlC8jzSmLAfbR+9JdCdN62cB0oA2y0gruPGozejHGoWK2Fvyf8K6tg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=0yNMVqE1TpCawff5sY7F/k+w7uuZ6nDXklzkzSWhdj4=; 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 13/15] net/mlx5e: RX, Generalize mlx5e_fill_mxbuf()
Date: Mon, 17 Apr 2023 15:19:01 +0300
Message-ID: <20230417121903.46218-14-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

Make mlx5e_fill_mxbuf() more generic: let it take an additional frame_sz parameter instead of deriving the frame size from the RQ struct. No functional change here, just a preparation for a downstream patch.
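A small aside (illustration only; plain C with stand-in types, not kernel code): the frame size is what tells the XDP core where the buffer ends, so a caller describing a differently sized buffer (as the Striding RQ path will in the next patches) has to supply its own value rather than the RQ-wide rq->buff.frame0_sz.

#include <assert.h>
#include <stdint.h>

/* Stand-in for an xdp_buff-like descriptor: frame_sz, together with the
 * buffer start, determines where the frame (and its tailroom) ends.
 */
struct model_buff {
	unsigned char *hard_start;
	uint32_t frame_sz;
};

static unsigned char *model_frame_end(const struct model_buff *b)
{
	return b->hard_start + b->frame_sz;
}

int main(void)
{
	static unsigned char page[4096];     /* legacy-RQ-style fragment, RQ-wide size */
	static unsigned char skb_head[2048]; /* SKB linear buffer, per-caller size */
	struct model_buff from_page = { page, sizeof(page) };
	struct model_buff from_skb = { skb_head, sizeof(skb_head) };

	/* Same fill routine, different frame sizes: the end-of-frame differs. */
	assert(model_frame_end(&from_page) == page + sizeof(page));
	assert(model_frame_end(&from_skb) == skb_head + sizeof(skb_head));
	return 0;
}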
Reviewed-by: Saeed Mahameed Signed-off-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 1118327f6467..a2c4b3df5757 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1630,10 +1630,10 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va, } static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, - void *va, u16 headroom, u32 len, + void *va, u16 headroom, u32 frame_sz, u32 len, struct mlx5e_xdp_buff *mxbuf) { - xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq); + xdp_init_buff(&mxbuf->xdp, frame_sz, &rq->xdp_rxq); xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true); mxbuf->cqe = cqe; mxbuf->rq = rq; @@ -1666,7 +1666,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, struct mlx5e_xdp_buff mxbuf; net_prefetchw(va); /* xdp_frame data area */ - mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf); + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz, + cqe_bcnt, &mxbuf); if (mlx5e_xdp_handle(rq, prog, &mxbuf)) return NULL; /* page/packet was consumed by XDP */ @@ -1714,7 +1715,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi net_prefetchw(va); /* xdp_frame data area */ net_prefetch(va + rx_headroom); - mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, frag_consumed_bytes, &mxbuf); + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz, + frag_consumed_bytes, &mxbuf); sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp); truesize = 0; @@ -2042,7 +2044,8 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, struct mlx5e_xdp_buff mxbuf; net_prefetchw(va); /* xdp_frame data area */ - mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf); + mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, rq->buff.frame0_sz, + cqe_bcnt, &mxbuf); if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) __set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */ From patchwork Mon Apr 17 12:19:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13213779 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84BBCC77B72 for ; Mon, 17 Apr 2023 12:21:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230518AbjDQMV4 (ORCPT ); Mon, 17 Apr 2023 08:21:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41198 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231180AbjDQMV0 (ORCPT ); Mon, 17 Apr 2023 08:21:26 -0400 Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2069.outbound.protection.outlook.com [40.107.244.69]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85D509EEB for ; Mon, 17 Apr 2023 05:20:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Tariq Toukan
To: "David S. Miller", Jakub Kicinski
CC: Eric Dumazet, Paolo Abeni, Jesper Dangaard Brouer, Toke Hoiland-Jorgensen, Saeed Mahameed, Lorenzo Bianconi, Gal Pressman, Henning Fehrmann, Oliver Behnke, Tariq Toukan
Subject: [PATCH net-next 14/15] net/mlx5e: RX, Prepare non-linear striding RQ for XDP multi-buffer support
Date: Mon, 17 Apr 2023 15:19:02 +0300
Message-ID: <20230417121903.46218-15-tariqt@nvidia.com>
In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com>
References: <20230417121903.46218-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

In preparation for supporting XDP multi-buffer in Striding RQ, use an xdp_buff struct to describe the packet. Make its skb_shared_info coincide with that of the allocated SKB, then add the fragments using the xdp_buff API.
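To see why this works, a sketch of the address arithmetic may help (illustration only, with a stand-in constant instead of the kernel's SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) and MLX5_SKB_FRAG_SZ() details): when the xdp_buff is built on the SKB's own head buffer and its frame size is the SKB end offset plus the shared-info footprint, the shared info reached through the xdp_buff and the one reached through the SKB are the same memory.

#include <assert.h>
#include <stdint.h>

/* Simplified model: alignment and struct sizes are stand-ins, not the
 * kernel's. Only the address arithmetic matters here.
 */
#define MODEL_SHINFO_SZ 64u

/* "SKB view": shared info sits right after the data area, at head + end_offset. */
static unsigned char *skb_view_shinfo(unsigned char *head, uint32_t end_offset)
{
	return head + end_offset;
}

/* "xdp_buff view": shared info sits at the end of the frame,
 * at data_hard_start + frame_sz - shinfo footprint.
 */
static unsigned char *xdp_view_shinfo(unsigned char *hard_start, uint32_t frame_sz)
{
	return hard_start + frame_sz - MODEL_SHINFO_SZ;
}

int main(void)
{
	static unsigned char head[2048];
	uint32_t end_offset = 1024;
	/* Choosing frame_sz = end_offset + shinfo footprint (what the hunk
	 * below does via MLX5_SKB_FRAG_SZ(skb_end_offset(skb))) makes both
	 * views resolve to the same address.
	 */
	uint32_t frame_sz = end_offset + MODEL_SHINFO_SZ;

	assert(skb_view_shinfo(head, end_offset) == xdp_view_shinfo(head, frame_sz));
	return 0;
}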
Signed-off-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en_rx.c | 51 +++++++++++++++++-- 1 file changed, 47 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index a2c4b3df5757..2e99bef49dd6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1977,10 +1977,17 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx]; u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt); struct mlx5e_frag_page *head_page = frag_page; - u32 frag_offset = head_offset + headlen; - u32 byte_cnt = cqe_bcnt - headlen; + u32 frag_offset = head_offset; + u32 byte_cnt = cqe_bcnt; + struct skb_shared_info *sinfo; + struct mlx5e_xdp_buff mxbuf; + unsigned int truesize = 0; struct sk_buff *skb; + u32 linear_frame_sz; + u16 linear_data_len; dma_addr_t addr; + u16 linear_hr; + void *va; skb = napi_alloc_skb(rq->cq.napi, ALIGN(MLX5E_RX_MAX_HEAD, sizeof(long))); @@ -1989,16 +1996,52 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w return NULL; } + va = skb->head; net_prefetchw(skb->data); - /* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. */ + frag_offset += headlen; + byte_cnt -= headlen; + linear_hr = skb_headroom(skb); + linear_data_len = headlen; + linear_frame_sz = MLX5_SKB_FRAG_SZ(skb_end_offset(skb)); if (unlikely(frag_offset >= PAGE_SIZE)) { frag_page++; frag_offset -= PAGE_SIZE; } skb_mark_for_recycle(skb); - mlx5e_fill_skb_data(skb, rq, frag_page, byte_cnt, frag_offset); + mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, &mxbuf); + net_prefetch(mxbuf.xdp.data); + + sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp); + + while (byte_cnt) { + /* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. 
*/ + u32 pg_consumed_bytes = min_t(u32, PAGE_SIZE - frag_offset, byte_cnt); + + if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state)) + truesize += pg_consumed_bytes; + else + truesize += ALIGN(pg_consumed_bytes, BIT(rq->mpwqe.log_stride_sz)); + + mlx5e_add_skb_shared_info_frag(rq, sinfo, &mxbuf.xdp, frag_page, frag_offset, + pg_consumed_bytes); + byte_cnt -= pg_consumed_bytes; + frag_offset = 0; + frag_page++; + } + if (xdp_buff_has_frags(&mxbuf.xdp)) { + struct mlx5e_frag_page *pagep; + + xdp_update_skb_shared_info(skb, sinfo->nr_frags, + sinfo->xdp_frags_size, truesize, + xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp)); + + pagep = frag_page - sinfo->nr_frags; + do + pagep->frags++; + while (++pagep < frag_page); + } /* copy header */ addr = page_pool_get_dma_addr(head_page->page); mlx5e_copy_skb_header(rq, skb, head_page->page, addr, From patchwork Mon Apr 17 12:19:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 13213780 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23C9CC77B76 for ; Mon, 17 Apr 2023 12:22:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231214AbjDQMWP (ORCPT ); Mon, 17 Apr 2023 08:22:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40890 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231136AbjDQMVp (ORCPT ); Mon, 17 Apr 2023 08:21:45 -0400 Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2086.outbound.protection.outlook.com [40.107.94.86]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B1FF5AD28 for ; Mon, 17 Apr 2023 05:21:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=iYRadkqdzvlutA+17QFYv1loi7Vp/4kbMKYYNyJGZpBxcXZZ7ib06n1ebfRAvdDSNrxSYP8ymaPmJM6d3mH9hxh9dRJ5rs/Ev6BEfZK+x7DFvVxs8oN9W2o0kNW2wzIxURV5PRZgj/GqvYcGhGNEwQuWQZX2sklwsJNWlc6WNO0jaZDTH1ib1K3Nyloukh6p9V4yZqzEK8bvRo3v+/GcQM2mZ5ZpMGutN8llqTqmoBatLneHeox5GXrr95iHaEXBW2av8jDkwPobCNzfcsw6eZvqZd6kHxIM0UAu/3jrv89zBDSBAl/a/pPnRW3YbmbtDBSuxMD9aJ8igzRgJUqxSw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=sVQtEADz7kzPrySHF82I4YoQ84yduAPClVb0gUYuMz4=; b=ITZoU1HZTzuLwAatgIPrUB/GXlS+xtpwPBoszOrPIK2RvPvyYNgkWE7ru3qF2fs2i3pPnnJsydOfkQlQkqabTaj1Xy08w0P2n0fiOxOyZmpmaD9IjVjHqDnTsNciz0DYRwzuibXgAQyo+geui0mJHdC0yoE+8ZYN5p3K/XRXOLSqj+V7QibP5Ebq/C+ub8OUw/TXtq1mBn8nkSFhqRXdLWpBLhO5Jj7388ZHLWg+FLB6g4cWnyLe+2fnz0MfLCTqpjmD07mzBphyJOQIOMUNmZOfcFbPqyFU2boKQ5bIllvYAJpTTZcYyu4/Jq6Bc4/keWbe3FLP/1hASo3Psnfhag== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=davemloft.net smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=sVQtEADz7kzPrySHF82I4YoQ84yduAPClVb0gUYuMz4=; 
From: Tariq Toukan
Miller" , Jakub Kicinski CC: Eric Dumazet , Paolo Abeni , Jesper Dangaard Brouer , Toke Hoiland-Jorgensen , , Saeed Mahameed , Lorenzo Bianconi , Gal Pressman , Henning Fehrmann , "Oliver Behnke" , Tariq Toukan Subject: [PATCH net-next 15/15] net/mlx5e: RX, Add XDP multi-buffer support in Striding RQ Date: Mon, 17 Apr 2023 15:19:03 +0300 Message-ID: <20230417121903.46218-16-tariqt@nvidia.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230417121903.46218-1-tariqt@nvidia.com> References: <20230417121903.46218-1-tariqt@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BN8NAM11FT113:EE_|IA1PR12MB6020:EE_ X-MS-Office365-Filtering-Correlation-Id: dea0c307-5815-4e1d-5518-08db3f3e336b X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: bKC7soXqE0c56QfIrbpRAdjusQliqwY2KT+Ui/iAoHu9c0Mq4XpIcxEbOucPA5aLf/s+C9XFyD+ynJ53126V91flNo0le/bDMEgYrF9UG/Svh7hy0hBqABHmKSp4i0vemePSst+MtOec+lOmkKxOnrSc4grqjgIGsgfykSvoyPx89/uUhE3wOBb6yNdgcxuFl9lMKH23lTuoZZ21lgUTo6wsvJ+raQIkGjFFPlSD4Z5tkIq815mQHUk+Lt5fNWPqcGTe/ERmADpNvptI/49y934mmtjtUftYRwML/Nh4k6egfATrE76Mtd2EtNjlKZo9A0eZeGYTgcWPJ2KIbd2Sr2AdAV1+zpr8BJ86F0PWrfwEP8nRxlxOHnoeRoxiXhUUy2nvIHTlfmZ6i1Qw5/Dz5vB7Uwas+SMrfwxVMcG3kBfQdrnGGvDaJLD5VzZvgg9wy2CVhlMRi4N5s5V18H81hV4ZPZF6g53NClx2GacS9fCKJ8PKilNByJLDidOushUwjsMz0M3iUgigky6U81mjWRo3HeDJMag8mHo/dPeMnl75iiDtEhvvatGHHs/Nq+4qpx76mn7kfzrocnFJdWkWYrruFNCnXMcAGDA1EE1C+tFaLDCs/u/TFSAJFA4TNqiem96D/ZbPd3Go0s1jDhw71kXDFQ/Aif4vzMGGZUHV2DmB5T6McmZQz1W0/A46CF+gtFMROp6s0O4gSmyJRUnt/XgQPjoIeHczZ0sCN4upw6sq2fTYvYGEDLmy6ikq9kJ1+oUqG4aBvzd3+5v4U0PVUpl+LxSkcrbyGPT1h4S6x1w= X-Forefront-Antispam-Report: CIP:216.228.117.160;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:dc6edge1.nvidia.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(39860400002)(346002)(396003)(451199021)(46966006)(36840700001)(40470700004)(478600001)(6666004)(34020700004)(8936002)(8676002)(316002)(41300700001)(82740400003)(4326008)(70586007)(40480700001)(70206006)(54906003)(7636003)(110136005)(356005)(40460700003)(186003)(107886003)(2906002)(30864003)(36756003)(1076003)(26005)(426003)(336012)(86362001)(83380400001)(47076005)(82310400005)(2616005)(36860700001)(5660300002)(7696005)(7416002);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:20:58.4220 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: dea0c307-5815-4e1d-5518-08db3f3e336b X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.117.160];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT113.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6020 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Here we add support for multi-buffer XDP handling in Striding RQ, which is our default out-of-the-box RQ type. Before this series, loading such an XDP program would fail, until you switch to the legacy RQ (by unsetting the rx_striding_rq priv-flag). To overcome the lack of headroom and tailroom between the strides, we allocate a side page to be used for the descriptor (xdp_buff / skb) and the linear part. 
When an XDP program is attached, we structure the xdp_buff so that it
contains no data in the linear part, and the whole packet resides in the
fragments.

In case of XDP_PASS, where an SKB still needs to be created, we copy up to
256 bytes to its linear part, to match the current behavior, and satisfy
functions that assume finding the packet headers in the SKB linear part
(like eth_type_trans).

Performance testing:

Packet rate test, 64 bytes, 32 channels, MTU 9000 bytes.
CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz.
NIC: ConnectX-6 Dx, at 100 Gbps.

+----------+-------------+-------------+---------+
| Test     | Legacy RQ   | Striding RQ | Speedup |
+----------+-------------+-------------+---------+
| XDP_DROP | 101,615,544 | 117,191,020 | +15%    |
+----------+-------------+-------------+---------+
| XDP_TX   |  95,608,169 | 117,043,422 | +22%    |
+----------+-------------+-------------+---------+

Reviewed-by: Saeed Mahameed
Signed-off-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   1 +
 .../ethernet/mellanox/mlx5/core/en/params.c   |  21 ++-
 .../ethernet/mellanox/mlx5/core/en/params.h   |   3 +
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  37 +++--
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 135 +++++++++++++-----
 5 files changed, 138 insertions(+), 59 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 0e15afbe1673..b8987a404d75 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -587,6 +587,7 @@ union mlx5e_alloc_units {
 struct mlx5e_mpw_info {
 	u16 consumed_strides;
 	DECLARE_BITMAP(skip_release_bitmap, MLX5_MPWRQ_MAX_PAGES_PER_WQE);
+	struct mlx5e_frag_page linear_page;
 	union mlx5e_alloc_units alloc_units;
 };
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 196862e67af3..ef546ed8b4d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -323,6 +323,20 @@ static bool mlx5e_verify_rx_mpwqe_strides(struct mlx5_core_dev *mdev,
 	return log_num_strides >= MLX5_MPWQE_LOG_NUM_STRIDES_BASE;
 }
 
+bool mlx5e_verify_params_rx_mpwqe_strides(struct mlx5_core_dev *mdev,
+					  struct mlx5e_params *params,
+					  struct mlx5e_xsk_param *xsk)
+{
+	u8 log_wqe_num_of_strides = mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk);
+	u8 log_wqe_stride_size = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+	enum mlx5e_mpwrq_umr_mode umr_mode = mlx5e_mpwrq_umr_mode(mdev, xsk);
+	u8 page_shift = mlx5e_mpwrq_page_shift(mdev, xsk);
+
+	return mlx5e_verify_rx_mpwqe_strides(mdev, log_wqe_stride_size,
+					     log_wqe_num_of_strides,
+					     page_shift, umr_mode);
+}
+
 bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
 				  struct mlx5e_params *params,
 				  struct mlx5e_xsk_param *xsk)
@@ -405,6 +419,10 @@ u8 mlx5e_mpwqe_get_log_stride_size(struct mlx5_core_dev *mdev,
 	if (mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk))
 		return order_base_2(mlx5e_rx_get_linear_stride_sz(mdev, params, xsk, true));
 
+	/* XDP in mlx5e doesn't support multiple packets per page. */
+	if (params->xdp_prog)
+		return PAGE_SHIFT;
+
 	return MLX5_MPWRQ_DEF_LOG_STRIDE_SZ(mdev);
 }
 
@@ -575,9 +593,6 @@ int mlx5e_mpwrq_validate_regular(struct mlx5_core_dev *mdev, struct mlx5e_params
 	if (!mlx5e_check_fragmented_striding_rq_cap(mdev, page_shift, umr_mode))
 		return -EOPNOTSUPP;
 
-	if (params->xdp_prog && !mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL))
-		return -EINVAL;
-
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index c9be6eb88012..a5d20f6d6d9c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -153,6 +153,9 @@ int mlx5e_build_channel_param(struct mlx5_core_dev *mdev,
 u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
 int mlx5e_validate_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
+bool mlx5e_verify_params_rx_mpwqe_strides(struct mlx5_core_dev *mdev,
+					  struct mlx5e_params *params,
+					  struct mlx5e_xsk_param *xsk);
 
 static inline void mlx5e_params_print_info(struct mlx5_core_dev *mdev,
 					   struct mlx5e_params *params)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index a95ce206391b..7eb1eeb115ca 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -803,6 +803,9 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		pool_size = rq->mpwqe.pages_per_wqe <<
 			mlx5e_mpwqe_get_log_rq_size(mdev, params, xsk);
 
+		if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk) && params->xdp_prog)
+			pool_size *= 2; /* additional page per packet for the linear part */
+
 		rq->mpwqe.log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
 		rq->mpwqe.num_strides =
 			BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk));
@@ -4060,10 +4063,9 @@ void mlx5e_set_xdp_feature(struct net_device *netdev)
 	val = NETDEV_XDP_ACT_BASIC |
 	      NETDEV_XDP_ACT_REDIRECT |
 	      NETDEV_XDP_ACT_XSK_ZEROCOPY |
+	      NETDEV_XDP_ACT_RX_SG |
 	      NETDEV_XDP_ACT_NDO_XMIT |
 	      NETDEV_XDP_ACT_NDO_XMIT_SG;
-	if (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC)
-		val |= NETDEV_XDP_ACT_RX_SG;
 	xdp_set_features_flag(netdev, val);
 }
 
@@ -4261,23 +4263,20 @@ static bool mlx5e_params_validate_xdp(struct net_device *netdev,
 		mlx5e_rx_is_linear_skb(mdev, params, NULL) :
 		mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL);
 
-	/* XDP affects striding RQ parameters. Block XDP if striding RQ won't be
-	 * supported with the new parameters: if PAGE_SIZE is bigger than
-	 * MLX5_MPWQE_LOG_STRIDE_SZ_MAX, striding RQ can't be used, even though
-	 * the MTU is small enough for the linear mode, because XDP uses strides
-	 * of PAGE_SIZE on regular RQs.
-	 */
-	if (!is_linear && params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
-		netdev_warn(netdev, "XDP is not allowed with striding RQ and MTU(%d) > %d\n",
-			    params->sw_mtu,
-			    mlx5e_xdp_max_mtu(params, NULL));
-		return false;
-	}
-	if (!is_linear && !params->xdp_prog->aux->xdp_has_frags) {
-		netdev_warn(netdev, "MTU(%d) > %d, too big for an XDP program not aware of multi buffer\n",
-			    params->sw_mtu,
-			    mlx5e_xdp_max_mtu(params, NULL));
-		return false;
+	if (!is_linear) {
+		if (!params->xdp_prog->aux->xdp_has_frags) {
+			netdev_warn(netdev, "MTU(%d) > %d, too big for an XDP program not aware of multi buffer\n",
+				    params->sw_mtu,
+				    mlx5e_xdp_max_mtu(params, NULL));
+			return false;
+		}
+		if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ &&
+		    !mlx5e_verify_params_rx_mpwqe_strides(mdev, params, NULL)) {
+			netdev_warn(netdev, "XDP is not allowed with striding RQ and MTU(%d) > %d\n",
+				    params->sw_mtu,
+				    mlx5e_xdp_max_mtu(params, NULL));
+			return false;
+		}
 	}
 
 	return true;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 2e99bef49dd6..a8c2ae389d6c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1982,36 +1982,51 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	struct skb_shared_info *sinfo;
 	struct mlx5e_xdp_buff mxbuf;
 	unsigned int truesize = 0;
+	struct bpf_prog *prog;
 	struct sk_buff *skb;
 	u32 linear_frame_sz;
 	u16 linear_data_len;
-	dma_addr_t addr;
 	u16 linear_hr;
 	void *va;
 
-	skb = napi_alloc_skb(rq->cq.napi,
-			     ALIGN(MLX5E_RX_MAX_HEAD, sizeof(long)));
-	if (unlikely(!skb)) {
-		rq->stats->buff_alloc_err++;
-		return NULL;
-	}
-
-	va = skb->head;
-	net_prefetchw(skb->data);
+	prog = rcu_dereference(rq->xdp_prog);
 
-	frag_offset += headlen;
-	byte_cnt -= headlen;
-	linear_hr = skb_headroom(skb);
-	linear_data_len = headlen;
-	linear_frame_sz = MLX5_SKB_FRAG_SZ(skb_end_offset(skb));
-	if (unlikely(frag_offset >= PAGE_SIZE)) {
-		frag_page++;
-		frag_offset -= PAGE_SIZE;
+	if (prog) {
+		/* area for bpf_xdp_[store|load]_bytes */
+		net_prefetchw(page_address(frag_page->page) + frag_offset);
+		if (unlikely(mlx5e_page_alloc_fragmented(rq, &wi->linear_page))) {
+			rq->stats->buff_alloc_err++;
+			return NULL;
+		}
+		va = page_address(wi->linear_page.page);
+		net_prefetchw(va); /* xdp_frame data area */
+		linear_hr = XDP_PACKET_HEADROOM;
+		linear_data_len = 0;
+		linear_frame_sz = MLX5_SKB_FRAG_SZ(linear_hr + MLX5E_RX_MAX_HEAD);
+	} else {
+		skb = napi_alloc_skb(rq->cq.napi,
+				     ALIGN(MLX5E_RX_MAX_HEAD, sizeof(long)));
+		if (unlikely(!skb)) {
+			rq->stats->buff_alloc_err++;
+			return NULL;
+		}
+		skb_mark_for_recycle(skb);
+		va = skb->head;
+		net_prefetchw(va); /* xdp_frame data area */
+		net_prefetchw(skb->data);
+
+		frag_offset += headlen;
+		byte_cnt -= headlen;
+		linear_hr = skb_headroom(skb);
+		linear_data_len = headlen;
+		linear_frame_sz = MLX5_SKB_FRAG_SZ(skb_end_offset(skb));
+		if (unlikely(frag_offset >= PAGE_SIZE)) {
+			frag_page++;
+			frag_offset -= PAGE_SIZE;
+		}
 	}
-	skb_mark_for_recycle(skb);
 	mlx5e_fill_mxbuf(rq, cqe, va, linear_hr, linear_frame_sz, linear_data_len, &mxbuf);
-	net_prefetch(mxbuf.xdp.data);
 
 	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
@@ -2030,25 +2045,71 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 		frag_offset = 0;
 		frag_page++;
 	}
-	if (xdp_buff_has_frags(&mxbuf.xdp)) {
-		struct mlx5e_frag_page *pagep;
-
-		xdp_update_skb_shared_info(skb, sinfo->nr_frags,
-					   sinfo->xdp_frags_size, truesize,
-					   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+	if (prog) {
+		if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
+			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
+				int i;
+
+				for (i = 0; i < sinfo->nr_frags; i++)
+					/* non-atomic */
+					__set_bit(page_idx + i, wi->skip_release_bitmap);
+				return NULL;
+			}
+			mlx5e_page_release_fragmented(rq, &wi->linear_page);
+			return NULL; /* page/packet was consumed by XDP */
+		}
+
+		skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start,
+					     linear_frame_sz,
+					     mxbuf.xdp.data - mxbuf.xdp.data_hard_start, 0,
+					     mxbuf.xdp.data - mxbuf.xdp.data_meta);
+		if (unlikely(!skb)) {
+			mlx5e_page_release_fragmented(rq, &wi->linear_page);
+			return NULL;
+		}
 
-		pagep = frag_page - sinfo->nr_frags;
-		do
-			pagep->frags++;
-		while (++pagep < frag_page);
-	}
-	/* copy header */
-	addr = page_pool_get_dma_addr(head_page->page);
-	mlx5e_copy_skb_header(rq, skb, head_page->page, addr,
-			      head_offset, head_offset, headlen);
-	/* skb linear part was allocated with headlen and aligned to long */
-	skb->tail += headlen;
-	skb->len += headlen;
+		skb_mark_for_recycle(skb);
+		wi->linear_page.frags++;
+		mlx5e_page_release_fragmented(rq, &wi->linear_page);
+
+		if (xdp_buff_has_frags(&mxbuf.xdp)) {
+			struct mlx5e_frag_page *pagep;
+
+			/* sinfo->nr_frags is reset by build_skb, calculate again. */
+			xdp_update_skb_shared_info(skb, frag_page - head_page,
+						   sinfo->xdp_frags_size, truesize,
+						   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+
+			pagep = head_page;
+			do
+				pagep->frags++;
+			while (++pagep < frag_page);
+		}
+		__pskb_pull_tail(skb, headlen);
+	} else {
+		dma_addr_t addr;
+
+		if (xdp_buff_has_frags(&mxbuf.xdp)) {
+			struct mlx5e_frag_page *pagep;
+
+			xdp_update_skb_shared_info(skb, sinfo->nr_frags,
+						   sinfo->xdp_frags_size, truesize,
+						   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
+
+			pagep = frag_page - sinfo->nr_frags;
+			do
+				pagep->frags++;
+			while (++pagep < frag_page);
+		}
+		/* copy header */
+		addr = page_pool_get_dma_addr(head_page->page);
+		mlx5e_copy_skb_header(rq, skb, head_page->page, addr,
+				      head_offset, head_offset, headlen);
+		/* skb linear part was allocated with headlen and aligned to long */
+		skb->tail += headlen;
+		skb->len += headlen;
+	}
 
 	return skb;
 }
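[Not part of the patch; illustration only.] For the XDP_PASS hand-off the
commit message describes, here is a hedged sketch of the general pattern
(closer to the generic __xdp_build_skb_from_frame style than to the driver's
own helpers): build an skb over the side page's linear area, re-attach the
fragments the xdp_buff already describes, then pull the headers into the
linear part so helpers like eth_type_trans() find them there.
xdp_pass_to_skb(), frags_truesize and pull_len are illustrative names, and
frame_sz is assumed to match the xdp_buff's frame size.

/* Hedged sketch only: XDP_PASS -> skb over a side page with an empty linear
 * part. Assumes pull_len <= skb->len and that the linear area was sized with
 * enough room for the header copy.
 */
#include <linux/skbuff.h>
#include <net/xdp.h>

static struct sk_buff *xdp_pass_to_skb(struct xdp_buff *xdp, u32 frame_sz,
					unsigned int frags_truesize,
					unsigned int pull_len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	unsigned int frags_size = sinfo->xdp_frags_size;
	bool pfmemalloc = xdp_buff_is_frag_pfmemalloc(xdp);
	bool has_frags = xdp_buff_has_frags(xdp);
	u8 nr_frags = sinfo->nr_frags;
	struct sk_buff *skb;

	/* build_skb() clears the leading shared-info fields (nr_frags among
	 * them; the patch notes this reset), so the frag accounting is
	 * snapshotted above, before the skb is built.
	 */
	skb = build_skb(xdp->data_hard_start, frame_sz);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, xdp->data - xdp->data_hard_start);
	skb_put(skb, xdp->data_end - xdp->data); /* zero: empty linear part */

	/* Re-attach the fragments the XDP program saw (possibly grown/shrunk). */
	if (has_frags)
		xdp_update_skb_shared_info(skb, nr_frags, frags_size,
					   frags_truesize, pfmemalloc);

	/* Copy the packet headers (up to ~256 bytes) into the linear part so
	 * callers that expect them there, e.g. eth_type_trans(), keep working.
	 */
	if (pull_len)
		__pskb_pull_tail(skb, pull_len);

	return skb;
}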