From patchwork Wed Jul 13 05:15:58 2022
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 12915974
From: Tariq Toukan
To: Boris Pismenny, John Fastabend, Jakub Kicinski
Cc: David S. Miller, Eric Dumazet, Paolo Abeni, Saeed Mahameed, Gal Pressman, Tariq Toukan, Maxim Mikityanskiy
Subject: [PATCH net-next V2 1/6] net/tls: Perform immediate device ctx cleanup when possible
Date: Wed, 13 Jul 2022 08:15:58 +0300
Message-ID: <20220713051603.14014-2-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org
The TLS context destructor can be run in atomic context. Cleanup operations for device-offloaded contexts could require access and interaction with the device callbacks, which might sleep. Hence, the cleanup of such contexts must be deferred and completed inside an async work.

For all other contexts this is not necessary, as their cleanup is atomic. Invoke cleanup immediately for them, avoiding the queueing of redundant gc work.
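The deferral decision described above can be sketched in plain C. This is a userspace illustration only, not kernel code: the stub functions stand in for schedule_work() and tls_device_free_ctx(), and the struct is a simplified stand-in for struct tls_context.

```c
/* Illustrative userspace sketch of the decision logic in
 * tls_device_queue_ctx_destruction(): only contexts that must call
 * sleeping device callbacks are deferred to async (gc) work; all
 * others are freed immediately.
 */
#include <assert.h>
#include <stdbool.h>

#define TLS_HW 1

struct tls_ctx {
	void *netdev;   /* non-NULL while still bound to a device */
	int tx_conf;    /* TLS_HW when TX is device-offloaded */
	bool freed;     /* set by the immediate-free path */
	bool queued;    /* set by the deferred (gc work) path */
};

/* Deferred path: in the kernel this is schedule_work() on the gc work. */
static void queue_gc_work(struct tls_ctx *ctx) { ctx->queued = true; }

/* Immediate path: in the kernel this is tls_device_free_ctx(). */
static void free_ctx(struct tls_ctx *ctx) { ctx->freed = true; }

void queue_ctx_destruction(struct tls_ctx *ctx)
{
	bool async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;

	if (async_cleanup)
		queue_gc_work(ctx);     /* device callbacks may sleep */
	else
		free_ctx(ctx);          /* cleanup is atomic; no work item */
}
```

With this split, only offloaded contexts pay the cost of a work-item round trip.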
Signed-off-by: Tariq Toukan
Reviewed-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 net/tls/tls_device.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 227b92a3064a..fdb7b7a4b05c 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -96,16 +96,24 @@ static void tls_device_gc_task(struct work_struct *work)
 static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
 {
 	unsigned long flags;
+	bool async_cleanup;
 
 	spin_lock_irqsave(&tls_device_lock, flags);
-	list_move_tail(&ctx->list, &tls_device_gc_list);
-
-	/* schedule_work inside the spinlock
-	 * to make sure tls_device_down waits for that work.
-	 */
-	schedule_work(&tls_device_gc_work);
+	async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;
+	if (async_cleanup) {
+		list_move_tail(&ctx->list, &tls_device_gc_list);
+
+		/* schedule_work inside the spinlock
+		 * to make sure tls_device_down waits for that work.
+		 */
+		schedule_work(&tls_device_gc_work);
+	} else {
+		list_del(&ctx->list);
+	}
 	spin_unlock_irqrestore(&tls_device_lock, flags);
+
+	if (!async_cleanup)
+		tls_device_free_ctx(ctx);
 }
 
 /* We assume that the socket is already connected */

From patchwork Wed Jul 13 05:15:59 2022
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 12915980
From: Tariq Toukan
To: Boris Pismenny, John Fastabend, Jakub Kicinski
Cc: David S. Miller, Eric Dumazet, Paolo Abeni, Saeed Mahameed, Gal Pressman, Tariq Toukan, Maxim Mikityanskiy
Subject: [PATCH net-next V2 2/6] net/tls: Multi-threaded calls to TX tls_dev_del
Date: Wed, 13 Jul 2022 08:15:59 +0300
Message-ID: <20220713051603.14014-3-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
Multiple TLS device-offloaded contexts can be added in parallel via concurrent calls to .tls_dev_add, while calls to .tls_dev_del are serialized in tls_device_gc_task. This is not sustainable behavior: it creates a rate gap between the add and del operations (the addition rate outpaces the deletion rate), so when running long enough the TLS device resources can be exhausted, and new connections fail to offload.

Replace the single-threaded garbage-collector work with a per-context work item, so destructions can be handled on several cores in parallel. Use a new dedicated destruct workqueue for this.
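The per-context work item relies on the classic container_of pattern: the work struct is embedded in the offload context, and the worker recovers the enclosing context from the work pointer alone. A userspace sketch of just that mechanism (names mirror the patch; the struct layout and run_work() stand-in are simplifications, not the kernel API):

```c
/* Userspace illustration of the embedded-work + container_of pattern
 * used by tls_device_tx_del_task() in this patch.
 */
#include <assert.h>
#include <stddef.h>

struct work_struct {
	void (*fn)(struct work_struct *w);
};

/* Recover a pointer to the enclosing struct from a member pointer. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct offload_ctx_tx {
	int id;
	struct work_struct destruct_work;  /* embedded: one per context */
	int destroyed;
};

static void tx_del_task(struct work_struct *w)
{
	/* As in tls_device_tx_del_task(): no global list walk, the work
	 * pointer alone identifies the context to tear down.
	 */
	struct offload_ctx_tx *ctx =
		container_of(w, struct offload_ctx_tx, destruct_work);
	ctx->destroyed = 1;
}

/* Stand-in for queue_work(): a real workqueue would run this on a worker
 * thread, so many contexts can be destructed concurrently.
 */
static void run_work(struct work_struct *w) { w->fn(w); }
```

Because each context carries its own work item, a multi-threaded workqueue can execute many destructions at once instead of funneling them through one gc task.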
Tested with mlx5 device:
Before: 22141 add/sec,   103 del/sec
After:  11684 add/sec, 11684 del/sec

Signed-off-by: Tariq Toukan
Reviewed-by: Maxim Mikityanskiy
Signed-off-by: Saeed Mahameed
---
 include/net/tls.h    |  2 ++
 net/tls/tls.h        |  4 +--
 net/tls/tls_device.c | 65 +++++++++++++++++++-------------------
 net/tls/tls_main.c   |  7 ++++-
 4 files changed, 38 insertions(+), 40 deletions(-)

v2: Per Jakub's comments:
- Remove bundling of work and back-pointer. Put directly in tls_offload_context_tx.
- Use new dedicated workqueue for destruct works. Flush it on cleanup.

diff --git a/include/net/tls.h b/include/net/tls.h
index 8742e13bc362..57a8fbbf395d 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -142,6 +142,8 @@ struct tls_offload_context_tx {
 	struct scatterlist sg_tx_data[MAX_SKB_FRAGS];
 	void (*sk_destruct)(struct sock *sk);
+	struct work_struct destruct_work;
+	struct tls_context *ctx;
 	u8 driver_state[] __aligned(8);
 	/* The TLS layer reserves room for driver specific state
 	 * Currently the belief is that there is not enough
diff --git a/net/tls/tls.h b/net/tls/tls.h
index 8005ee25157d..e0ccc96a0850 100644
--- a/net/tls/tls.h
+++ b/net/tls/tls.h
@@ -133,7 +133,7 @@ static inline struct tls_msg *tls_msg(struct sk_buff *skb)
 }
 
 #ifdef CONFIG_TLS_DEVICE
-void tls_device_init(void);
+int tls_device_init(void);
 void tls_device_cleanup(void);
 int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
 void tls_device_free_resources_tx(struct sock *sk);
@@ -143,7 +143,7 @@ void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq);
 int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
 			 struct sk_buff *skb, struct strp_msg *rxm);
 #else
-static inline void tls_device_init(void) {}
+static inline int tls_device_init(void) { return 0; }
 static inline void tls_device_cleanup(void) {}
 static inline int
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index fdb7b7a4b05c..ba528dbb69b4 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -46,10 +46,8 @@
  */
 
 static DECLARE_RWSEM(device_offload_lock);
 
-static void tls_device_gc_task(struct work_struct *work);
+static struct workqueue_struct *destruct_wq __read_mostly;
 
-static DECLARE_WORK(tls_device_gc_work, tls_device_gc_task);
-static LIST_HEAD(tls_device_gc_list);
 static LIST_HEAD(tls_device_list);
 static LIST_HEAD(tls_device_down_list);
 static DEFINE_SPINLOCK(tls_device_lock);
@@ -68,29 +66,17 @@ static void tls_device_free_ctx(struct tls_context *ctx)
 	tls_ctx_free(NULL, ctx);
 }
 
-static void tls_device_gc_task(struct work_struct *work)
+static void tls_device_tx_del_task(struct work_struct *work)
 {
-	struct tls_context *ctx, *tmp;
-	unsigned long flags;
-	LIST_HEAD(gc_list);
-
-	spin_lock_irqsave(&tls_device_lock, flags);
-	list_splice_init(&tls_device_gc_list, &gc_list);
-	spin_unlock_irqrestore(&tls_device_lock, flags);
-
-	list_for_each_entry_safe(ctx, tmp, &gc_list, list) {
-		struct net_device *netdev = ctx->netdev;
+	struct tls_offload_context_tx *offload_ctx =
+		container_of(work, struct tls_offload_context_tx, destruct_work);
+	struct tls_context *ctx = offload_ctx->ctx;
+	struct net_device *netdev = ctx->netdev;
 
-		if (netdev && ctx->tx_conf == TLS_HW) {
-			netdev->tlsdev_ops->tls_dev_del(netdev, ctx,
-							TLS_OFFLOAD_CTX_DIR_TX);
-			dev_put(netdev);
-			ctx->netdev = NULL;
-		}
-
-		list_del(&ctx->list);
-		tls_device_free_ctx(ctx);
-	}
+	netdev->tlsdev_ops->tls_dev_del(netdev, ctx, TLS_OFFLOAD_CTX_DIR_TX);
+	dev_put(netdev);
+	ctx->netdev = NULL;
+	tls_device_free_ctx(ctx);
 }
 
 static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
@@ -99,21 +85,17 @@ static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
 	bool async_cleanup;
 
 	spin_lock_irqsave(&tls_device_lock, flags);
+	list_del(&ctx->list); /* Remove from tls_device_list / tls_device_down_list */
+	spin_unlock_irqrestore(&tls_device_lock, flags);
+
 	async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;
 	if (async_cleanup) {
-		list_move_tail(&ctx->list, &tls_device_gc_list);
+		struct tls_offload_context_tx *offload_ctx = tls_offload_ctx_tx(ctx);
 
-		/* schedule_work inside the spinlock
-		 * to make sure tls_device_down waits for that work.
-		 */
-		schedule_work(&tls_device_gc_work);
+		queue_work(destruct_wq, &offload_ctx->destruct_work);
 	} else {
-		list_del(&ctx->list);
-	}
-	spin_unlock_irqrestore(&tls_device_lock, flags);
-
-	if (!async_cleanup)
 		tls_device_free_ctx(ctx);
+	}
 }
 
 /* We assume that the socket is already connected */
@@ -1150,6 +1132,9 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
 	start_marker_record->len = 0;
 	start_marker_record->num_frags = 0;
 
+	INIT_WORK(&offload_ctx->destruct_work, tls_device_tx_del_task);
+	offload_ctx->ctx = ctx;
+
 	INIT_LIST_HEAD(&offload_ctx->records_list);
 	list_add_tail(&start_marker_record->list, &offload_ctx->records_list);
 	spin_lock_init(&offload_ctx->lock);
@@ -1389,7 +1374,7 @@ static int tls_device_down(struct net_device *netdev)
 
 	up_write(&device_offload_lock);
 
-	flush_work(&tls_device_gc_work);
+	flush_workqueue(destruct_wq);
 
 	return NOTIFY_DONE;
 }
@@ -1428,14 +1413,20 @@ static struct notifier_block tls_dev_notifier = {
 	.notifier_call	= tls_dev_event,
 };
 
-void __init tls_device_init(void)
+int __init tls_device_init(void)
 {
+	destruct_wq = alloc_workqueue("ktls_device_destruct", 0, 0);
+	if (!destruct_wq)
+		return -ENOMEM;
+
 	register_netdevice_notifier(&tls_dev_notifier);
+	return 0;
 }
 
 void __exit tls_device_cleanup(void)
 {
 	unregister_netdevice_notifier(&tls_dev_notifier);
-	flush_work(&tls_device_gc_work);
+	flush_workqueue(destruct_wq);
+	destroy_workqueue(destruct_wq);
 	clean_acked_data_flush();
 }
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index f71b46568112..9703636cfc60 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -1141,7 +1141,12 @@ static int __init tls_register(void)
 	if (err)
 		return err;
 
-	tls_device_init();
+	err = tls_device_init();
+	if (err) {
+		unregister_pernet_subsys(&tls_proc_ops);
+ return err; + } + tcp_register_ulp(&tcp_tls_ulp_ops); return 0; From patchwork Wed Jul 13 05:16:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tariq Toukan X-Patchwork-Id: 12915979 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FB04CCA485 for ; Wed, 13 Jul 2022 05:17:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232123AbiGMFQ7 (ORCPT ); Wed, 13 Jul 2022 01:16:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33714 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232134AbiGMFQx (ORCPT ); Wed, 13 Jul 2022 01:16:53 -0400 Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2045.outbound.protection.outlook.com [40.107.95.45]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83715D5147 for ; Tue, 12 Jul 2022 22:16:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=fJqm9OGhf+BOUI/j3jGLcfy0JUfmeu4s/agsZRmIix+NaAqT+giCDXcf+TXD46wuWBSncirracp30kw12l8PU7uOzkOzbDzS/mPNYn2vQgPTTa+ajosDfrsNlYugC9Ur4bI/PmgIhBsyXAFXl/MJ/++d9mRfaeIgP441Z1InLDEL8LzYs/2UIVputakXLNkSyYTsjRLVgNtutGeh9oLXVqges/CD/RcsJZ6UM8NS8Yv7dU7oGVahWBKB2gAc72KdjnbC7OFNyY77m1vpd64+k1aNiaOBp0mWU5lnODSC/bYacCvvYWtuIh5ZHA5h3pch53tcgSpmR6LKAT4UsAKp5Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=SYjvzCmGDUXODP2tJfaAfL94a8R6OVjYcWLj589rSl8=; 
From: Tariq Toukan
To: Boris Pismenny, John Fastabend, Jakub Kicinski
Cc: David S. Miller, Eric Dumazet, Paolo Abeni, Saeed Mahameed, Gal Pressman, Tariq Toukan, Gal Pressman
Subject: [PATCH net-next V2 3/6] net/mlx5e: kTLS, Introduce TLS-specific create TIS
Date: Wed, 13 Jul 2022 08:16:00 +0300
Message-ID: <20220713051603.14014-4-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
TLS TIS objects have a defined role in mapping and reaching the HW TLS contexts. Some standard TIS attributes (like LAG port affinity) are not relevant for them. Use a dedicated TLS TIS create function instead of the generic mlx5e_create_tis.
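The refactor extracts a shared setter so that every TLS TIS create path fills the context fields identically. A minimal userspace sketch of that pattern (the struct here is an illustrative stand-in, not the real auto-generated mlx5 TIS context layout; field names follow the patch):

```c
/* Simplified model of the mlx5e_ktls_set_tisc() helper pattern: one
 * function owns the field setup, so callers cannot drift apart.
 */
#include <assert.h>

struct tisc_model {
	int tls_en;
	int pd;
	int transport_domain;
};

/* Mirrors mlx5e_ktls_set_tisc(): enable TLS and bind the protection
 * domain and transport domain, nothing else (no LAG port affinity etc.).
 */
static void ktls_set_tisc(struct tisc_model *t, int pdn, int tdn)
{
	t->tls_en = 1;
	t->pd = pdn;
	t->transport_domain = tdn;
}
```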
Signed-off-by: Tariq Toukan
Reviewed-by: Gal Pressman
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index cc5cb3010e64..2cd0437666d2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -39,16 +39,20 @@ u16 mlx5e_ktls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *pa
 	return stop_room;
 }
 
+static void mlx5e_ktls_set_tisc(struct mlx5_core_dev *mdev, void *tisc)
+{
+	MLX5_SET(tisc, tisc, tls_en, 1);
+	MLX5_SET(tisc, tisc, pd, mdev->mlx5e_res.hw_objs.pdn);
+	MLX5_SET(tisc, tisc, transport_domain, mdev->mlx5e_res.hw_objs.td.tdn);
+}
+
 static int mlx5e_ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
 {
 	u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
-	void *tisc;
 
-	tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
-	MLX5_SET(tisc, tisc, tls_en, 1);
+	mlx5e_ktls_set_tisc(mdev, MLX5_ADDR_OF(create_tis_in, in, ctx));
 
-	return mlx5e_create_tis(mdev, in, tisn);
+	return mlx5_core_create_tis(mdev, in, tisn);
 }
 
 struct mlx5e_ktls_offload_context_tx {

From patchwork Wed Jul 13 05:16:01 2022
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 12915978
From: Tariq Toukan
To: Boris Pismenny , John Fastabend , Jakub Kicinski
CC: "David S . Miller" , Eric Dumazet , Paolo Abeni , , Saeed Mahameed , Gal Pressman , Tariq Toukan , Gal Pressman
Subject: [PATCH net-next V2 4/6] net/mlx5e: kTLS, Take stats out of OOO handler
Date: Wed, 13 Jul 2022 08:16:01 +0300
Message-ID: <20220713051603.14014-5-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
References: <20220713051603.14014-1-tariqt@nvidia.com>
Let the caller of mlx5e_ktls_tx_handle_ooo() take care of updating the
stats, according to the returned value. As the switch/case blocks are
already there, this change saves unnecessary branches in the handler.
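The pattern here is a common one: a helper returns a status enum, and the caller's existing dispatch switch bumps the matching counter, so the helper itself stays branch-free with respect to accounting. A minimal user-space sketch (all names illustrative, not the driver's):

```c
#include <assert.h>

/* Toy model of the refactor: handle_ooo() just reports a status, and
 * the caller both dispatches on it and updates the matching counter.
 */
enum sync_ret { SYNC_DONE, SYNC_SKIP_NO_DATA, SYNC_FAIL };

struct stats { long ooo, skip_no_sync_data, drop_no_sync_data; };

static enum sync_ret handle_ooo(int have_data)
{
	/* the real handler also posts resync WQEs; elided here */
	return have_data ? SYNC_DONE : SYNC_SKIP_NO_DATA;
}

static int caller(struct stats *s, int have_data)
{
	enum sync_ret ret = handle_ooo(have_data);

	s->ooo++; /* unconditional: every OOO event is counted, as in the patch */
	switch (ret) {
	case SYNC_DONE:
		return 0;
	case SYNC_SKIP_NO_DATA:
		s->skip_no_sync_data++;
		return 0;
	case SYNC_FAIL:
		s->drop_no_sync_data++;
		return -1;
	}
	return -1;
}
```

Since the switch already exists for control flow, piggybacking the counters on it adds no new branches to the fast path.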
Signed-off-by: Tariq Toukan
Reviewed-by: Gal Pressman
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/en_accel/ktls_tx.c | 27 ++++++++-----------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 2cd0437666d2..99e1cd015083 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -382,26 +382,17 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
 			 int datalen,
 			 u32 seq)
 {
-	struct mlx5e_sq_stats *stats = sq->stats;
 	enum mlx5e_ktls_sync_retval ret;
 	struct tx_sync_info info = {};
-	int i = 0;
+	int i;
 
 	ret = tx_sync_info_get(priv_tx, seq, datalen, &info);
-	if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) {
-		if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) {
-			stats->tls_skip_no_sync_data++;
-			return MLX5E_KTLS_SYNC_SKIP_NO_DATA;
-		}
-		/* We might get here if a retransmission reaches the driver
-		 * after the relevant record is acked.
+	if (unlikely(ret != MLX5E_KTLS_SYNC_DONE))
+		/* We might get here with ret == FAIL if a retransmission
+		 * reaches the driver after the relevant record is acked.
 		 * It should be safe to drop the packet in this case
 		 */
-		stats->tls_drop_no_sync_data++;
-		goto err_out;
-	}
-
-	stats->tls_ooo++;
+		return ret;
 
 	tx_post_resync_params(sq, priv_tx, info.rcd_sn);
 
@@ -413,7 +404,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
 		return MLX5E_KTLS_SYNC_DONE;
 	}
 
-	for (; i < info.nr_frags; i++) {
+	for (i = 0; i < info.nr_frags; i++) {
 		unsigned int orig_fsz, frag_offset = 0, n = 0;
 		skb_frag_t *f = &info.frags[i];
 
@@ -483,15 +474,19 @@ bool mlx5e_ktls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq,
 		enum mlx5e_ktls_sync_retval ret =
 			mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq);
 
+		stats->tls_ooo++;
+
 		switch (ret) {
 		case MLX5E_KTLS_SYNC_DONE:
 			break;
 		case MLX5E_KTLS_SYNC_SKIP_NO_DATA:
+			stats->tls_skip_no_sync_data++;
 			if (likely(!skb->decrypted))
 				goto out;
 			WARN_ON_ONCE(1);
-			fallthrough;
+			goto err_out;
 		case MLX5E_KTLS_SYNC_FAIL:
+			stats->tls_drop_no_sync_data++;
 			goto err_out;
 		}
 	}

From patchwork Wed Jul 13 05:16:02 2022
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 12915977
From: Tariq Toukan
To: Boris Pismenny , John Fastabend , Jakub Kicinski
CC: "David S . Miller" , Eric Dumazet , Paolo Abeni , , Saeed Mahameed , Gal Pressman , Tariq Toukan , Gal Pressman
Subject: [PATCH net-next V2 5/6] net/mlx5e: kTLS, Recycle objects of device-offloaded TLS TX connections
Date: Wed, 13 Jul 2022 08:16:02 +0300
Message-ID: <20220713051603.14014-6-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
References: <20220713051603.14014-1-tariqt@nvidia.com>
The transport interface send (TIS) object is responsible for performing
all transport-related operations on the transmit side. The ConnectX HW
uses a TIS object to save and access the TLS crypto information and
state of an offloaded TX kTLS connection.

Before this patch, we used to create a new TIS per connection and
destroy it once it's closed. Every create and destroy of a TIS is a FW
command. The same applies to the private TLS context, which we used to
dynamically allocate and free per connection.

Resource recycling reduces the impact of the allocation/free operations
and helps speed up the connection rate.
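The recycling idea can be sketched in plain user-space C: destroyed-connection objects go onto a bounded, lock-protected free list, and new connections pop from it before paying the expensive create cost again. This is a toy model; the object type is a placeholder for the driver's TIS context, and the 256-entry cap mirrors the patch's MLX5E_TLS_TX_POOL_MAX_SIZE.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define POOL_MAX_SIZE 256 /* mirrors MLX5E_TLS_TX_POOL_MAX_SIZE */

struct obj { struct obj *next; }; /* stand-in for a recyclable TX context */

struct pool {
	pthread_mutex_t lock;
	struct obj *head;
	size_t size;
	size_t allocs; /* created from scratch (cf. tx_tls_pool_alloc) */
	size_t frees;  /* actually destroyed   (cf. tx_tls_pool_free)  */
};

static struct obj *pool_pop(struct pool *p)
{
	struct obj *o;

	pthread_mutex_lock(&p->lock);
	if (!p->head) {
		/* empty pool: slow path; the real code issues FW commands here */
		p->allocs++;
		pthread_mutex_unlock(&p->lock);
		return calloc(1, sizeof(struct obj));
	}
	o = p->head;
	p->head = o->next;
	p->size--;
	pthread_mutex_unlock(&p->lock);
	return o;
}

static void pool_push(struct pool *p, struct obj *o)
{
	pthread_mutex_lock(&p->lock);
	if (p->size >= POOL_MAX_SIZE) {
		/* pool is full: really destroy instead of caching */
		p->frees++;
		pthread_mutex_unlock(&p->lock);
		free(o);
		return;
	}
	o->next = p->head;
	p->head = o;
	p->size++;
	pthread_mutex_unlock(&p->lock);
}
```

A popped object still has to be re-keyed for the new connection; in the patch that is done through the fast-path HW interface rather than FW commands, which is where the speedup comes from.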
In this feature we maintain a pool of TX objects and use it to recycle
the resources instead of re-creating them per connection. A cached TIS
popped from the pool is updated to serve the new connection via the
fast-path HW interface, updating the tls static and progress params.
This is a very fast operation, significantly faster than FW commands.

On recycling, a WQE fence is required after the context params change.
This guarantees that the data is sent after the context has been
successfully updated in hardware, and that the context modification
doesn't interfere with existing traffic.

Signed-off-by: Tariq Toukan
Reviewed-by: Gal Pressman
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/en_accel/en_accel.h    |  10 +
 .../mellanox/mlx5/core/en_accel/ktls.h        |  14 ++
 .../mellanox/mlx5/core/en_accel/ktls_stats.c  |   2 +
 .../mellanox/mlx5/core/en_accel/ktls_tx.c     | 211 ++++++++++++++----
 .../net/ethernet/mellanox/mlx5/core/en_main.c |   9 +
 5 files changed, 199 insertions(+), 47 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
index 04c0a5e1c89a..1839f1ab1ddd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
@@ -194,4 +194,14 @@ static inline void mlx5e_accel_cleanup_rx(struct mlx5e_priv *priv)
 {
 	mlx5e_ktls_cleanup_rx(priv);
 }
+
+static inline int mlx5e_accel_init_tx(struct mlx5e_priv *priv)
+{
+	return mlx5e_ktls_init_tx(priv);
+}
+
+static inline void mlx5e_accel_cleanup_tx(struct mlx5e_priv *priv)
+{
+	mlx5e_ktls_cleanup_tx(priv);
+}
 #endif /* __MLX5E_EN_ACCEL_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index d016624fbc9d..948400dee525 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -42,6 +42,8 @@ static inline bool mlx5e_ktls_type_check(struct mlx5_core_dev *mdev,
 }
 
 void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv);
+int mlx5e_ktls_init_tx(struct mlx5e_priv *priv);
+void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv);
 int mlx5e_ktls_init_rx(struct mlx5e_priv *priv);
 void mlx5e_ktls_cleanup_rx(struct mlx5e_priv *priv);
 int mlx5e_ktls_set_feature_rx(struct net_device *netdev, bool enable);
@@ -62,6 +64,8 @@ static inline bool mlx5e_is_ktls_rx(struct mlx5_core_dev *mdev)
 struct mlx5e_tls_sw_stats {
 	atomic64_t tx_tls_ctx;
 	atomic64_t tx_tls_del;
+	atomic64_t tx_tls_pool_alloc;
+	atomic64_t tx_tls_pool_free;
 	atomic64_t rx_tls_ctx;
 	atomic64_t rx_tls_del;
 };
@@ -69,6 +73,7 @@ struct mlx5e_tls_sw_stats {
 struct mlx5e_tls {
 	struct mlx5e_tls_sw_stats sw_stats;
 	struct workqueue_struct *rx_wq;
+	struct mlx5e_tls_tx_pool *tx_pool;
 };
 
 int mlx5e_ktls_init(struct mlx5e_priv *priv);
@@ -83,6 +88,15 @@ static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
 {
 }
 
+static inline int mlx5e_ktls_init_tx(struct mlx5e_priv *priv)
+{
+	return 0;
+}
+
+static inline void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv)
+{
+}
+
 static inline int mlx5e_ktls_init_rx(struct mlx5e_priv *priv)
 {
 	return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_stats.c
index 2ab46c4247ff..7c1c0eb16787 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_stats.c
@@ -41,6 +41,8 @@ static const struct counter_desc mlx5e_ktls_sw_stats_desc[] = {
 	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_ctx) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_del) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_pool_alloc) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_pool_free) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_ctx) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_del) },
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 99e1cd015083..24d1288e906a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -35,6 +35,7 @@ u16 mlx5e_ktls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *pa
 	stop_room += mlx5e_stop_room_for_wqe(mdev, MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS);
 	stop_room += mlx5e_stop_room_for_wqe(mdev, MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS);
 	stop_room += num_dumps * mlx5e_stop_room_for_wqe(mdev, MLX5E_KTLS_DUMP_WQEBBS);
+	stop_room += 1; /* fence nop */
 
 	return stop_room;
 }
@@ -56,13 +57,17 @@ static int mlx5e_ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
 }
 
 struct mlx5e_ktls_offload_context_tx {
-	struct tls_offload_context_tx *tx_ctx;
-	struct tls12_crypto_info_aes_gcm_128 crypto_info;
-	struct mlx5e_tls_sw_stats *sw_stats;
+	/* fast path */
 	u32 expected_seq;
 	u32 tisn;
-	u32 key_id;
 	bool ctx_post_pending;
+	/* control / resync */
+	struct list_head list_node; /* member of the pool */
+	struct tls12_crypto_info_aes_gcm_128 crypto_info;
+	struct tls_offload_context_tx *tx_ctx;
+	struct mlx5_core_dev *mdev;
+	struct mlx5e_tls_sw_stats *sw_stats;
+	u32 key_id;
 };
 
 static void
@@ -87,28 +92,136 @@ mlx5e_get_ktls_tx_priv_ctx(struct tls_context *tls_ctx)
 	return *ctx;
 }
 
+static struct mlx5e_ktls_offload_context_tx *
+mlx5e_tls_priv_tx_init(struct mlx5_core_dev *mdev, struct mlx5e_tls_sw_stats *sw_stats)
+{
+	struct mlx5e_ktls_offload_context_tx *priv_tx;
+	int err;
+
+	priv_tx = kzalloc(sizeof(*priv_tx), GFP_KERNEL);
+	if (!priv_tx)
+		return ERR_PTR(-ENOMEM);
+
+	priv_tx->mdev = mdev;
+	priv_tx->sw_stats = sw_stats;
+
+	err = mlx5e_ktls_create_tis(mdev, &priv_tx->tisn);
+	if (err) {
+		kfree(priv_tx);
+		return ERR_PTR(err);
+	}
+
+	return priv_tx;
+}
+
+static void mlx5e_tls_priv_tx_cleanup(struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+	mlx5e_destroy_tis(priv_tx->mdev, priv_tx->tisn);
+	kfree(priv_tx);
+}
+
+static void mlx5e_tls_priv_tx_list_cleanup(struct list_head *list)
+{
+	struct mlx5e_ktls_offload_context_tx *obj;
+
+	list_for_each_entry(obj, list, list_node)
+		mlx5e_tls_priv_tx_cleanup(obj);
+}
+
+/* Recycling pool API */
+
+struct mlx5e_tls_tx_pool {
+	struct mlx5_core_dev *mdev;
+	struct mlx5e_tls_sw_stats *sw_stats;
+	struct mutex lock; /* Protects access to the pool */
+	struct list_head list;
+#define MLX5E_TLS_TX_POOL_MAX_SIZE (256)
+	size_t size;
+};
+
+static struct mlx5e_tls_tx_pool *mlx5e_tls_tx_pool_init(struct mlx5_core_dev *mdev,
+							struct mlx5e_tls_sw_stats *sw_stats)
+{
+	struct mlx5e_tls_tx_pool *pool;
+
+	pool = kvzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return NULL;
+
+	INIT_LIST_HEAD(&pool->list);
+	mutex_init(&pool->lock);
+
+	pool->mdev = mdev;
+	pool->sw_stats = sw_stats;
+
+	return pool;
+}
+
+static void mlx5e_tls_tx_pool_cleanup(struct mlx5e_tls_tx_pool *pool)
+{
+	mlx5e_tls_priv_tx_list_cleanup(&pool->list);
+	atomic64_add(pool->size, &pool->sw_stats->tx_tls_pool_free);
+	kvfree(pool);
+}
+
+static void pool_push(struct mlx5e_tls_tx_pool *pool, struct mlx5e_ktls_offload_context_tx *obj)
+{
+	mutex_lock(&pool->lock);
+	if (pool->size >= MLX5E_TLS_TX_POOL_MAX_SIZE) {
+		mutex_unlock(&pool->lock);
+		mlx5e_tls_priv_tx_cleanup(obj);
+		atomic64_inc(&pool->sw_stats->tx_tls_pool_free);
+		return;
+	}
+	list_add(&obj->list_node, &pool->list);
+	pool->size++;
+	mutex_unlock(&pool->lock);
+}
+
+static struct mlx5e_ktls_offload_context_tx *pool_pop(struct mlx5e_tls_tx_pool *pool)
+{
+	struct mlx5e_ktls_offload_context_tx *obj;
+
+	mutex_lock(&pool->lock);
+	if (pool->size == 0) {
+		obj = mlx5e_tls_priv_tx_init(pool->mdev, pool->sw_stats);
+		if (!IS_ERR(obj))
+			atomic64_inc(&pool->sw_stats->tx_tls_pool_alloc);
+		goto out;
+	}
+
+	obj = list_first_entry(&pool->list, struct mlx5e_ktls_offload_context_tx,
+			       list_node);
+	list_del(&obj->list_node);
+	pool->size--;
+out:
+	mutex_unlock(&pool->lock);
+	return obj;
+}
+
+/* End of pool API */
+
 int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
 		      struct tls_crypto_info *crypto_info, u32 start_offload_tcp_sn)
 {
 	struct mlx5e_ktls_offload_context_tx *priv_tx;
+	struct mlx5e_tls_tx_pool *pool;
 	struct tls_context *tls_ctx;
-	struct mlx5_core_dev *mdev;
 	struct mlx5e_priv *priv;
 	int err;
 
 	tls_ctx = tls_get_ctx(sk);
 	priv = netdev_priv(netdev);
-	mdev = priv->mdev;
+	pool = priv->tls->tx_pool;
 
-	priv_tx = kzalloc(sizeof(*priv_tx), GFP_KERNEL);
-	if (!priv_tx)
-		return -ENOMEM;
+	priv_tx = pool_pop(pool);
+	if (IS_ERR(priv_tx))
+		return PTR_ERR(priv_tx);
 
-	err = mlx5_ktls_create_key(mdev, crypto_info, &priv_tx->key_id);
+	err = mlx5_ktls_create_key(pool->mdev, crypto_info, &priv_tx->key_id);
 	if (err)
 		goto err_create_key;
 
-	priv_tx->sw_stats = &priv->tls->sw_stats;
 	priv_tx->expected_seq = start_offload_tcp_sn;
 	priv_tx->crypto_info =
 		*(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
@@ -116,36 +229,29 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
 
 	mlx5e_set_ktls_tx_priv_ctx(tls_ctx, priv_tx);
 
-	err = mlx5e_ktls_create_tis(mdev, &priv_tx->tisn);
-	if (err)
-		goto err_create_tis;
-
 	priv_tx->ctx_post_pending = true;
 	atomic64_inc(&priv_tx->sw_stats->tx_tls_ctx);
 
 	return 0;
 
-err_create_tis:
-	mlx5_ktls_destroy_key(mdev, priv_tx->key_id);
 err_create_key:
-	kfree(priv_tx);
+	pool_push(pool, priv_tx);
 	return err;
 }
 
 void mlx5e_ktls_del_tx(struct net_device *netdev, struct tls_context *tls_ctx)
 {
 	struct mlx5e_ktls_offload_context_tx *priv_tx;
-	struct mlx5_core_dev *mdev;
+	struct mlx5e_tls_tx_pool *pool;
 	struct mlx5e_priv *priv;
 
 	priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
 	priv = netdev_priv(netdev);
-	mdev = priv->mdev;
+	pool = priv->tls->tx_pool;
 
 	atomic64_inc(&priv_tx->sw_stats->tx_tls_del);
-	mlx5e_destroy_tis(mdev, priv_tx->tisn);
-	mlx5_ktls_destroy_key(mdev, priv_tx->key_id);
-	kfree(priv_tx);
+	mlx5_ktls_destroy_key(priv_tx->mdev, priv_tx->key_id);
+	pool_push(pool, priv_tx);
 }
 
 static void tx_fill_wi(struct mlx5e_txqsq *sq,
@@ -206,6 +312,16 @@ post_progress_params(struct mlx5e_txqsq *sq,
 	sq->pc += num_wqebbs;
 }
 
+static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
+{
+	struct mlx5_wq_cyc *wq = &sq->wq;
+	u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+
+	tx_fill_wi(sq, pi, 1, 0, NULL);
+
+	mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
+}
+
 static void
 mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
 			      struct mlx5e_ktls_offload_context_tx *priv_tx,
@@ -217,6 +333,7 @@ mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
 	post_static_params(sq, priv_tx, fence_first_post);
 	post_progress_params(sq, priv_tx, progress_fence);
+	tx_post_fence_nop(sq);
 }
 
 struct tx_sync_info {
@@ -309,7 +426,7 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
 }
 
 static int
-tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool first)
+tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn)
 {
 	struct mlx5_wqe_ctrl_seg *cseg;
 	struct mlx5_wqe_data_seg *dseg;
@@ -331,7 +448,6 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
 	cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_DUMP);
 	cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
 	cseg->tis_tir_num = cpu_to_be32(tisn << 8);
-	cseg->fm_ce_se = first ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
 
 	fsz = skb_frag_size(frag);
 	dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz,
@@ -366,16 +482,6 @@ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
 	stats->tls_dump_bytes += wi->num_bytes;
 }
 
-static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
-{
-	struct mlx5_wq_cyc *wq = &sq->wq;
-	u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-
-	tx_fill_wi(sq, pi, 1, 0, NULL);
-
-	mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
-}
-
 static enum mlx5e_ktls_sync_retval
 mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
 			 struct mlx5e_txqsq *sq,
@@ -396,14 +502,6 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
 
 	tx_post_resync_params(sq, priv_tx, info.rcd_sn);
 
-	/* If no dump WQE was sent, we need to have a fence NOP WQE before the
-	 * actual data xmit.
-	 */
-	if (!info.nr_frags) {
-		tx_post_fence_nop(sq);
-		return MLX5E_KTLS_SYNC_DONE;
-	}
-
 	for (i = 0; i < info.nr_frags; i++) {
 		unsigned int orig_fsz, frag_offset = 0, n = 0;
 		skb_frag_t *f = &info.frags[i];
@@ -411,13 +509,12 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
 		orig_fsz = skb_frag_size(f);
 
 		do {
-			bool fence = !(i || frag_offset);
 			unsigned int fsz;
 
 			n++;
 			fsz = min_t(unsigned int, sq->hw_mtu, orig_fsz - frag_offset);
 			skb_frag_size_set(f, fsz);
-			if (tx_post_resync_dump(sq, f, priv_tx->tisn, fence)) {
+			if (tx_post_resync_dump(sq, f, priv_tx->tisn)) {
 				page_ref_add(skb_frag_page(f), n - 1);
 				goto err_out;
 			}
@@ -465,9 +562,8 @@ bool mlx5e_ktls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq,
 
 	priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
 
-	if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) {
+	if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx)))
 		mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false);
-	}
 
 	seq = ntohl(tcp_hdr(skb)->seq);
 	if (unlikely(priv_tx->expected_seq != seq)) {
@@ -505,3 +601,24 @@ bool mlx5e_ktls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq,
 	dev_kfree_skb_any(skb);
 	return false;
 }
+
+int mlx5e_ktls_init_tx(struct mlx5e_priv *priv)
+{
+	if (!mlx5e_is_ktls_tx(priv->mdev))
+		return 0;
+
+	priv->tls->tx_pool = mlx5e_tls_tx_pool_init(priv->mdev, &priv->tls->sw_stats);
+	if (!priv->tls->tx_pool)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv)
+{
+	if (!mlx5e_is_ktls_tx(priv->mdev))
+		return;
+
+	mlx5e_tls_tx_pool_cleanup(priv->tls->tx_pool);
+	priv->tls->tx_pool = NULL;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 087952b84ccb..eb18cd459c7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3135,6 +3135,7 @@ int mlx5e_create_tises(struct mlx5e_priv *priv)
 
 static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv)
 {
+	mlx5e_accel_cleanup_tx(priv);
 	mlx5e_destroy_tises(priv);
 }
 
@@ -5120,8 +5121,16 @@ static int mlx5e_init_nic_tx(struct mlx5e_priv *priv)
 		return err;
 	}
 
+	err = mlx5e_accel_init_tx(priv);
+	if (err)
+		goto err_destroy_tises;
+
 	mlx5e_dcbnl_initialize(priv);
 	return 0;
+
+err_destroy_tises:
+	mlx5e_destroy_tises(priv);
+	return err;
 }
 
 static void mlx5e_nic_enable(struct mlx5e_priv *priv)

From patchwork Wed Jul 13 05:16:03 2022
X-Patchwork-Submitter: Tariq Toukan
X-Patchwork-Id: 12915976
From: Tariq Toukan
To: Boris Pismenny, John Fastabend, Jakub Kicinski
CC: "David S. Miller", Eric Dumazet, Paolo Abeni, Saeed Mahameed, Gal Pressman, Tariq Toukan
Subject: [PATCH net-next V2 6/6] net/mlx5e: kTLS, Dynamically re-size TX recycling pool
Date: Wed, 13 Jul 2022 08:16:03 +0300
Message-ID: <20220713051603.14014-7-tariqt@nvidia.com>
In-Reply-To: <20220713051603.14014-1-tariqt@nvidia.com>
References: <20220713051603.14014-1-tariqt@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Let the TLS TX recycle pool be more flexible in size, by continuously and dynamically allocating and releasing HW resources in response to changes in the connection rate and load.

Allocate and release pool entries in bulks of 16. Use a workqueue to allocate/release in the background. Allocate a new bulk when the pool size drops below the low threshold (1K). A symmetric operation releases a bulk when the pool size grows beyond the upper threshold (4K).

Every idle pool entry holds 1 TIS and 1 DEK (HW resources), in addition to ~100 bytes of host memory.
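The watermark arithmetic described above (bulks of 16, low threshold of 1K, high threshold of 4K) can be modeled in a few lines. This is an illustrative sketch, not driver code: the constant and helper names below are invented for the example, and only the trigger conditions mirror the patch.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical constants mirroring the commit message:
 * bulk of 16 entries, low watermark 1K, high watermark 4K. */
#define TLS_TX_POOL_BULK 16
#define TLS_TX_POOL_HIGH (4 * 1024)
#define TLS_TX_POOL_LOW  (TLS_TX_POOL_HIGH / 4)

/* After a pop: a background bulk allocation should be queued when the
 * size just crossed the low watermark, or when the pool ran empty and a
 * refill must start immediately (while the caller falls back to a
 * synchronous single allocation). */
static bool refill_needed_after_pop(size_t size_after_pop)
{
	return size_after_pop == TLS_TX_POOL_LOW || size_after_pop == 0;
}

/* After a push: a background bulk release should be queued when the
 * size just reached the high watermark. */
static bool release_needed_after_push(size_t size_after_push)
{
	return size_after_push == TLS_TX_POOL_HIGH;
}
```

Note the invariant the patch enforces with BUILD_BUG_ON: a freshly added bulk on top of the low watermark must stay below the high watermark, otherwise a refill could immediately trigger a release.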
Start with an empty pool to minimize the waste of memory and HW resources for non-TLS users that have device-offload TLS enabled. Upon a new request, in case the pool is empty, do not wait for a whole bulk allocation to complete. Instead, trigger an instant allocation of a single resource to reduce latency.

Performance tests:
Before: 11,684 CPS
After:  16,556 CPS

Signed-off-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/en_accel/ktls_tx.c     | 315 ++++++++++++++++--
 1 file changed, 289 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 24d1288e906a..fc8860012a18 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -56,6 +56,36 @@ static int mlx5e_ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
 	return mlx5_core_create_tis(mdev, in, tisn);
 }
 
+static int mlx5e_ktls_create_tis_cb(struct mlx5_core_dev *mdev,
+				    struct mlx5_async_ctx *async_ctx,
+				    u32 *out, int outlen,
+				    mlx5_async_cbk_t callback,
+				    struct mlx5_async_work *context)
+{
+	u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
+
+	mlx5e_ktls_set_tisc(mdev, MLX5_ADDR_OF(create_tis_in, in, ctx));
+	MLX5_SET(create_tis_in, in, opcode, MLX5_CMD_OP_CREATE_TIS);
+
+	return mlx5_cmd_exec_cb(async_ctx, in, sizeof(in),
+				out, outlen, callback, context);
+}
+
+static int mlx5e_ktls_destroy_tis_cb(struct mlx5_core_dev *mdev, u32 tisn,
+				     struct mlx5_async_ctx *async_ctx,
+				     u32 *out, int outlen,
+				     mlx5_async_cbk_t callback,
+				     struct mlx5_async_work *context)
+{
+	u32 in[MLX5_ST_SZ_DW(destroy_tis_in)] = {};
+
+	MLX5_SET(destroy_tis_in, in, opcode, MLX5_CMD_OP_DESTROY_TIS);
+	MLX5_SET(destroy_tis_in, in, tisn, tisn);
+
+	return mlx5_cmd_exec_cb(async_ctx, in, sizeof(in),
+				out, outlen, callback, context);
+}
+
 struct mlx5e_ktls_offload_context_tx {
 	/* fast path */
 	u32 expected_seq;
@@ -68,6 +98,7 @@ struct mlx5e_ktls_offload_context_tx {
 	struct mlx5_core_dev *mdev;
 	struct mlx5e_tls_sw_stats *sw_stats;
 	u32 key_id;
+	u8 create_err : 1;
 };
 
 static void
@@ -92,8 +123,81 @@ mlx5e_get_ktls_tx_priv_ctx(struct tls_context *tls_ctx)
 	return *ctx;
 }
 
+/* struct for callback API management */
+struct mlx5e_async_ctx {
+	struct mlx5_async_work context;
+	struct mlx5_async_ctx async_ctx;
+	struct work_struct work;
+	struct mlx5e_ktls_offload_context_tx *priv_tx;
+	struct completion complete;
+	int err;
+	union {
+		u32 out_create[MLX5_ST_SZ_DW(create_tis_out)];
+		u32 out_destroy[MLX5_ST_SZ_DW(destroy_tis_out)];
+	};
+};
+
+static struct mlx5e_async_ctx *mlx5e_bulk_async_init(struct mlx5_core_dev *mdev, int n)
+{
+	struct mlx5e_async_ctx *bulk_async;
+	int i;
+
+	bulk_async = kvcalloc(n, sizeof(struct mlx5e_async_ctx), GFP_KERNEL);
+	if (!bulk_async)
+		return NULL;
+
+	for (i = 0; i < n; i++) {
+		struct mlx5e_async_ctx *async = &bulk_async[i];
+
+		mlx5_cmd_init_async_ctx(mdev, &async->async_ctx);
+		init_completion(&async->complete);
+	}
+
+	return bulk_async;
+}
+
+static void mlx5e_bulk_async_cleanup(struct mlx5e_async_ctx *bulk_async, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++) {
+		struct mlx5e_async_ctx *async = &bulk_async[i];
+
+		mlx5_cmd_cleanup_async_ctx(&async->async_ctx);
+	}
+	kvfree(bulk_async);
+}
+
+static void create_tis_callback(int status, struct mlx5_async_work *context)
+{
+	struct mlx5e_async_ctx *async =
+		container_of(context, struct mlx5e_async_ctx, context);
+	struct mlx5e_ktls_offload_context_tx *priv_tx = async->priv_tx;
+
+	if (status) {
+		async->err = status;
+		priv_tx->create_err = 1;
+		goto out;
+	}
+
+	priv_tx->tisn = MLX5_GET(create_tis_out, async->out_create, tisn);
+out:
+	complete(&async->complete);
+}
+
+static void destroy_tis_callback(int status, struct mlx5_async_work *context)
+{
+	struct mlx5e_async_ctx *async =
+		container_of(context, struct mlx5e_async_ctx, context);
+	struct mlx5e_ktls_offload_context_tx *priv_tx = async->priv_tx;
+
+	complete(&async->complete);
+	kfree(priv_tx);
+}
+
 static struct mlx5e_ktls_offload_context_tx *
-mlx5e_tls_priv_tx_init(struct mlx5_core_dev *mdev, struct mlx5e_tls_sw_stats *sw_stats)
+mlx5e_tls_priv_tx_init(struct mlx5_core_dev *mdev, struct mlx5e_tls_sw_stats *sw_stats,
+		       struct mlx5e_async_ctx *async)
 {
 	struct mlx5e_ktls_offload_context_tx *priv_tx;
 	int err;
@@ -105,76 +209,229 @@ mlx5e_tls_priv_tx_init(struct mlx5_core_dev *mdev, struct mlx5e_tls_sw_stats *sw
 	priv_tx->mdev = mdev;
 	priv_tx->sw_stats = sw_stats;
 
-	err = mlx5e_ktls_create_tis(mdev, &priv_tx->tisn);
-	if (err) {
-		kfree(priv_tx);
-		return ERR_PTR(err);
+	if (!async) {
+		err = mlx5e_ktls_create_tis(mdev, &priv_tx->tisn);
+		if (err)
+			goto err_out;
+	} else {
+		async->priv_tx = priv_tx;
+		err = mlx5e_ktls_create_tis_cb(mdev, &async->async_ctx,
+					       async->out_create, sizeof(async->out_create),
+					       create_tis_callback, &async->context);
+		if (err)
+			goto err_out;
 	}
 
 	return priv_tx;
+
+err_out:
+	kfree(priv_tx);
+	return ERR_PTR(err);
 }
 
-static void mlx5e_tls_priv_tx_cleanup(struct mlx5e_ktls_offload_context_tx *priv_tx)
+static void mlx5e_tls_priv_tx_cleanup(struct mlx5e_ktls_offload_context_tx *priv_tx,
+				      struct mlx5e_async_ctx *async)
 {
-	mlx5e_destroy_tis(priv_tx->mdev, priv_tx->tisn);
-	kfree(priv_tx);
+	if (priv_tx->create_err) {
+		complete(&async->complete);
+		kfree(priv_tx);
+		return;
+	}
+	async->priv_tx = priv_tx;
+	mlx5e_ktls_destroy_tis_cb(priv_tx->mdev, priv_tx->tisn,
+				  &async->async_ctx,
+				  async->out_destroy, sizeof(async->out_destroy),
+				  destroy_tis_callback, &async->context);
 }
 
-static void mlx5e_tls_priv_tx_list_cleanup(struct list_head *list)
+static void mlx5e_tls_priv_tx_list_cleanup(struct mlx5_core_dev *mdev,
+					   struct list_head *list, int size)
 {
 	struct mlx5e_ktls_offload_context_tx *obj;
+	struct mlx5e_async_ctx *bulk_async;
+	int i;
+
+	bulk_async = mlx5e_bulk_async_init(mdev, size);
+	if (!bulk_async)
+		return;
 
-	list_for_each_entry(obj, list, list_node)
-		mlx5e_tls_priv_tx_cleanup(obj);
+	i = 0;
+	list_for_each_entry(obj, list, list_node) {
+		mlx5e_tls_priv_tx_cleanup(obj, &bulk_async[i]);
+		i++;
+	}
+
+	for (i = 0; i < size; i++) {
+		struct mlx5e_async_ctx *async = &bulk_async[i];
+
+		wait_for_completion(&async->complete);
+	}
+	mlx5e_bulk_async_cleanup(bulk_async, size);
 }
 
 /* Recycling pool API */
 
+#define MLX5E_TLS_TX_POOL_BULK (16)
+#define MLX5E_TLS_TX_POOL_HIGH (4 * 1024)
+#define MLX5E_TLS_TX_POOL_LOW (MLX5E_TLS_TX_POOL_HIGH / 4)
+
 struct mlx5e_tls_tx_pool {
 	struct mlx5_core_dev *mdev;
 	struct mlx5e_tls_sw_stats *sw_stats;
 	struct mutex lock; /* Protects access to the pool */
 	struct list_head list;
-#define MLX5E_TLS_TX_POOL_MAX_SIZE (256)
 	size_t size;
+
+	struct workqueue_struct *wq;
+	struct work_struct create_work;
+	struct work_struct destroy_work;
 };
 
+static void create_work(struct work_struct *work)
+{
+	struct mlx5e_tls_tx_pool *pool =
+		container_of(work, struct mlx5e_tls_tx_pool, create_work);
+	struct mlx5e_ktls_offload_context_tx *obj;
+	struct mlx5e_async_ctx *bulk_async;
+	LIST_HEAD(local_list);
+	int i, j, err = 0;
+
+	bulk_async = mlx5e_bulk_async_init(pool->mdev, MLX5E_TLS_TX_POOL_BULK);
+	if (!bulk_async)
+		return;
+
+	for (i = 0; i < MLX5E_TLS_TX_POOL_BULK; i++) {
+		obj = mlx5e_tls_priv_tx_init(pool->mdev, pool->sw_stats, &bulk_async[i]);
+		if (IS_ERR(obj)) {
+			err = PTR_ERR(obj);
+			break;
+		}
+		list_add(&obj->list_node, &local_list);
+	}
+
+	for (j = 0; j < i; j++) {
+		struct mlx5e_async_ctx *async = &bulk_async[j];
+
+		wait_for_completion(&async->complete);
+		if (!err && async->err)
+			err = async->err;
+	}
+	atomic64_add(i, &pool->sw_stats->tx_tls_pool_alloc);
+	mlx5e_bulk_async_cleanup(bulk_async, MLX5E_TLS_TX_POOL_BULK);
+	if (err)
+		goto err_out;
+
+	mutex_lock(&pool->lock);
+	if (pool->size + MLX5E_TLS_TX_POOL_BULK >= MLX5E_TLS_TX_POOL_HIGH) {
+		mutex_unlock(&pool->lock);
+		goto err_out;
+	}
+	list_splice(&local_list, &pool->list);
+	pool->size += MLX5E_TLS_TX_POOL_BULK;
+	if (pool->size <= MLX5E_TLS_TX_POOL_LOW)
+		queue_work(pool->wq, work);
+	mutex_unlock(&pool->lock);
+	return;
+
+err_out:
+	mlx5e_tls_priv_tx_list_cleanup(pool->mdev, &local_list, i);
+	atomic64_add(i, &pool->sw_stats->tx_tls_pool_free);
+}
+
+static void destroy_work(struct work_struct *work)
+{
+	struct mlx5e_tls_tx_pool *pool =
+		container_of(work, struct mlx5e_tls_tx_pool, destroy_work);
+	struct mlx5e_ktls_offload_context_tx *obj;
+	LIST_HEAD(local_list);
+	int i = 0;
+
+	mutex_lock(&pool->lock);
+	if (pool->size < MLX5E_TLS_TX_POOL_HIGH) {
+		mutex_unlock(&pool->lock);
+		return;
+	}
+
+	list_for_each_entry(obj, &pool->list, list_node)
+		if (++i == MLX5E_TLS_TX_POOL_BULK)
+			break;
+
+	list_cut_position(&local_list, &pool->list, &obj->list_node);
+	pool->size -= MLX5E_TLS_TX_POOL_BULK;
+	if (pool->size >= MLX5E_TLS_TX_POOL_HIGH)
+		queue_work(pool->wq, work);
+	mutex_unlock(&pool->lock);
+
+	mlx5e_tls_priv_tx_list_cleanup(pool->mdev, &local_list, MLX5E_TLS_TX_POOL_BULK);
+	atomic64_add(MLX5E_TLS_TX_POOL_BULK, &pool->sw_stats->tx_tls_pool_free);
+}
+
 static struct mlx5e_tls_tx_pool *mlx5e_tls_tx_pool_init(struct mlx5_core_dev *mdev,
 							struct mlx5e_tls_sw_stats *sw_stats)
 {
 	struct mlx5e_tls_tx_pool *pool;
 
+	BUILD_BUG_ON(MLX5E_TLS_TX_POOL_LOW + MLX5E_TLS_TX_POOL_BULK >= MLX5E_TLS_TX_POOL_HIGH);
+
 	pool = kvzalloc(sizeof(*pool), GFP_KERNEL);
 	if (!pool)
 		return NULL;
 
+	pool->wq = create_singlethread_workqueue("mlx5e_tls_tx_pool");
+	if (!pool->wq)
+		goto err_free;
+
 	INIT_LIST_HEAD(&pool->list);
 	mutex_init(&pool->lock);
 
+	INIT_WORK(&pool->create_work, create_work);
+	INIT_WORK(&pool->destroy_work, destroy_work);
+
 	pool->mdev = mdev;
 	pool->sw_stats = sw_stats;
 
 	return pool;
+
+err_free:
+	kvfree(pool);
+	return NULL;
+}
+
+static void mlx5e_tls_tx_pool_list_cleanup(struct mlx5e_tls_tx_pool *pool)
+{
+	while (pool->size > MLX5E_TLS_TX_POOL_BULK) {
+		struct mlx5e_ktls_offload_context_tx *obj;
+		LIST_HEAD(local_list);
+		int i = 0;
+
+		list_for_each_entry(obj, &pool->list, list_node)
+			if (++i == MLX5E_TLS_TX_POOL_BULK)
+				break;
+
+		list_cut_position(&local_list, &pool->list, &obj->list_node);
+		mlx5e_tls_priv_tx_list_cleanup(pool->mdev, &local_list, MLX5E_TLS_TX_POOL_BULK);
+		atomic64_add(MLX5E_TLS_TX_POOL_BULK, &pool->sw_stats->tx_tls_pool_free);
+		pool->size -= MLX5E_TLS_TX_POOL_BULK;
+	}
+	if (pool->size) {
+		mlx5e_tls_priv_tx_list_cleanup(pool->mdev, &pool->list, pool->size);
+		atomic64_add(pool->size, &pool->sw_stats->tx_tls_pool_free);
+	}
 }
 
 static void mlx5e_tls_tx_pool_cleanup(struct mlx5e_tls_tx_pool *pool)
 {
-	mlx5e_tls_priv_tx_list_cleanup(&pool->list);
-	atomic64_add(pool->size, &pool->sw_stats->tx_tls_pool_free);
+	mlx5e_tls_tx_pool_list_cleanup(pool);
+	destroy_workqueue(pool->wq);
 	kvfree(pool);
 }
 
 static void pool_push(struct mlx5e_tls_tx_pool *pool, struct mlx5e_ktls_offload_context_tx *obj)
 {
 	mutex_lock(&pool->lock);
-	if (pool->size >= MLX5E_TLS_TX_POOL_MAX_SIZE) {
-		mutex_unlock(&pool->lock);
-		mlx5e_tls_priv_tx_cleanup(obj);
-		atomic64_inc(&pool->sw_stats->tx_tls_pool_free);
-		return;
-	}
 	list_add(&obj->list_node, &pool->list);
-	pool->size++;
+	if (++pool->size == MLX5E_TLS_TX_POOL_HIGH)
+		queue_work(pool->wq, &pool->destroy_work);
 	mutex_unlock(&pool->lock);
 }
 
@@ -183,18 +440,24 @@ static struct mlx5e_ktls_offload_context_tx *pool_pop(struct mlx5e_tls_tx_pool *
 	struct mlx5e_ktls_offload_context_tx *obj;
 
 	mutex_lock(&pool->lock);
-	if (pool->size == 0) {
-		obj = mlx5e_tls_priv_tx_init(pool->mdev, pool->sw_stats);
+	if (unlikely(pool->size == 0)) {
+		/* pool is empty:
+		 * - trigger the populating work, and
+		 * - serve the current context via the regular blocking api.
+		 */
+		queue_work(pool->wq, &pool->create_work);
+		mutex_unlock(&pool->lock);
+		obj = mlx5e_tls_priv_tx_init(pool->mdev, pool->sw_stats, NULL);
 		if (!IS_ERR(obj))
 			atomic64_inc(&pool->sw_stats->tx_tls_pool_alloc);
-		goto out;
+		return obj;
 	}
 
 	obj = list_first_entry(&pool->list, struct mlx5e_ktls_offload_context_tx,
 			       list_node);
 	list_del(&obj->list_node);
-	pool->size--;
-out:
+	if (--pool->size == MLX5E_TLS_TX_POOL_LOW)
+		queue_work(pool->wq, &pool->create_work);
 	mutex_unlock(&pool->lock);
 	return obj;
 }
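
For readers unfamiliar with the fast path, the push/pop halves of such a recycling pool can be sketched in userspace. This is a simplified model (pthread mutex, singly linked list, invented type names), not the driver code: the real pool also kicks the create/destroy works at the watermarks and falls back to a blocking single allocation when empty.

```c
#include <pthread.h>
#include <stddef.h>

/* Minimal model of one pool entry; the real driver stores HW
 * TIS/DEK handles and links entries with a kernel list_head. */
struct pool_entry {
	struct pool_entry *next;
	int tisn; /* stand-in for the HW resource handle */
};

struct tx_pool {
	pthread_mutex_t lock;
	struct pool_entry *head;
	size_t size;
};

/* Return an idle entry to the pool (LIFO, so recently used
 * entries and their cache lines are reused first). */
static void pool_push(struct tx_pool *p, struct pool_entry *e)
{
	pthread_mutex_lock(&p->lock);
	e->next = p->head;
	p->head = e;
	p->size++;
	pthread_mutex_unlock(&p->lock);
}

/* Take an entry, or NULL when empty; in the driver, NULL is where
 * the synchronous fallback allocation and refill work kick in. */
static struct pool_entry *pool_pop(struct tx_pool *p)
{
	struct pool_entry *e;

	pthread_mutex_lock(&p->lock);
	e = p->head;
	if (e) {
		p->head = e->next;
		p->size--;
	}
	pthread_mutex_unlock(&p->lock);
	return e;
}
```

The LIFO order matches the patch's use of list_add/list_first_entry on the same end of the list.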