From patchwork Tue Jul 30 18:33:49 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747738
X-Patchwork-Delegate: kuba@kernel.org
From: Allen Pais
To: kuba@kernel.org, Jes Sorensen, "David S. Miller", Eric Dumazet, Paolo Abeni
Cc: kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais
Subject: [net-next v3 01/15] net: alteon: Convert tasklet API to new bottom half workqueue mechanism
Date: Tue, 30 Jul 2024 11:33:49 -0700
Message-Id: <20240730183403.4176544-2-allen.lkml@gmail.com>
In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com>
References: <20240730183403.4176544-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. This
patch replaces all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the alteon driver. This transition ensures
compatibility with the latest design and enhances performance.
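For readers new to the BH workqueue API, the mapping applied throughout this
series is: tasklet_setup() becomes INIT_WORK(), tasklet_schedule() becomes
queue_work(system_bh_wq, ...), from_tasklet() becomes from_work(), and
tasklet_kill() becomes cancel_work_sync(). The sketch below only illustrates
that shape with hypothetical my_* names; it is not code from the alteon
driver or any other driver touched in this series.

/*
 * Minimal sketch of the tasklet -> BH workqueue conversion pattern.
 * All my_* names are placeholders.
 */
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct my_priv {
	struct work_struct bh_work;	/* was: struct tasklet_struct my_tasklet */
	/* ... other driver state ... */
};

/* was: static void my_tasklet_fn(struct tasklet_struct *t) */
static void my_bh_work(struct work_struct *work)
{
	/* was: from_tasklet(priv, t, my_tasklet) */
	struct my_priv *priv = from_work(priv, work, bh_work);

	/* deferred bottom-half processing; when queued on system_bh_wq
	 * this still executes in softirq (BH) context */
}

static void my_open(struct my_priv *priv)
{
	/* was: tasklet_setup(&priv->my_tasklet, my_tasklet_fn); */
	INIT_WORK(&priv->bh_work, my_bh_work);
}

static irqreturn_t my_irq(int irq, void *data)
{
	struct my_priv *priv = data;

	/* was: tasklet_schedule(&priv->my_tasklet); */
	queue_work(system_bh_wq, &priv->bh_work);
	return IRQ_HANDLED;
}

static void my_close(struct my_priv *priv)
{
	/* was: tasklet_kill(&priv->my_tasklet); */
	cancel_work_sync(&priv->bh_work);
}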
Signed-off-by: Allen Pais --- drivers/net/ethernet/alteon/acenic.c | 26 +++++++++++++------------- drivers/net/ethernet/alteon/acenic.h | 8 ++++---- 2 files changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/alteon/acenic.c b/drivers/net/ethernet/alteon/acenic.c index 3d8ac63132fb..9e6f91df2ba0 100644 --- a/drivers/net/ethernet/alteon/acenic.c +++ b/drivers/net/ethernet/alteon/acenic.c @@ -1560,9 +1560,9 @@ static void ace_watchdog(struct net_device *data, unsigned int txqueue) } -static void ace_tasklet(struct tasklet_struct *t) +static void ace_bh_work(struct work_struct *work) { - struct ace_private *ap = from_tasklet(ap, t, ace_tasklet); + struct ace_private *ap = from_work(ap, work, ace_bh_work); struct net_device *dev = ap->ndev; int cur_size; @@ -1595,7 +1595,7 @@ static void ace_tasklet(struct tasklet_struct *t) #endif ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size); } - ap->tasklet_pending = 0; + ap->bh_work_pending = 0; } @@ -1617,7 +1617,7 @@ static void ace_dump_trace(struct ace_private *ap) * * Loading rings is safe without holding the spin lock since this is * done only before the device is enabled, thus no interrupts are - * generated and by the interrupt handler/tasklet handler. + * generated and by the interrupt handler/bh handler. */ static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs) { @@ -2160,7 +2160,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id) */ if (netif_running(dev)) { int cur_size; - int run_tasklet = 0; + int run_bh_work = 0; cur_size = atomic_read(&ap->cur_rx_bufs); if (cur_size < RX_LOW_STD_THRES) { @@ -2172,7 +2172,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id) ace_load_std_rx_ring(dev, RX_RING_SIZE - cur_size); } else - run_tasklet = 1; + run_bh_work = 1; } if (!ACE_IS_TIGON_I(ap)) { @@ -2188,7 +2188,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id) ace_load_mini_rx_ring(dev, RX_MINI_SIZE - cur_size); } else - run_tasklet = 1; + run_bh_work = 1; } } @@ -2205,12 +2205,12 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id) ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size); } else - run_tasklet = 1; + run_bh_work = 1; } } - if (run_tasklet && !ap->tasklet_pending) { - ap->tasklet_pending = 1; - tasklet_schedule(&ap->ace_tasklet); + if (run_bh_work && !ap->bh_work_pending) { + ap->bh_work_pending = 1; + queue_work(system_bh_wq, &ap->ace_bh_work); } } @@ -2267,7 +2267,7 @@ static int ace_open(struct net_device *dev) /* * Setup the bottom half rx ring refill handler */ - tasklet_setup(&ap->ace_tasklet, ace_tasklet); + INIT_WORK(&ap->ace_bh_work, ace_bh_work); return 0; } @@ -2301,7 +2301,7 @@ static int ace_close(struct net_device *dev) cmd.idx = 0; ace_issue_cmd(regs, &cmd); - tasklet_kill(&ap->ace_tasklet); + cancel_work_sync(&ap->ace_bh_work); /* * Make sure one CPU is not processing packets while diff --git a/drivers/net/ethernet/alteon/acenic.h b/drivers/net/ethernet/alteon/acenic.h index ca5ce0cbbad1..0e45a97b9c9b 100644 --- a/drivers/net/ethernet/alteon/acenic.h +++ b/drivers/net/ethernet/alteon/acenic.h @@ -2,7 +2,7 @@ #ifndef _ACENIC_H_ #define _ACENIC_H_ #include - +#include /* * Generate TX index update each time, when TX ring is closed. 
@@ -667,8 +667,8 @@ struct ace_private struct rx_desc *rx_mini_ring; struct rx_desc *rx_return_ring; - int tasklet_pending, jumbo; - struct tasklet_struct ace_tasklet; + int bh_work_pending, jumbo; + struct work_struct ace_bh_work; struct event *evt_ring; @@ -776,7 +776,7 @@ static int ace_open(struct net_device *dev); static netdev_tx_t ace_start_xmit(struct sk_buff *skb, struct net_device *dev); static int ace_close(struct net_device *dev); -static void ace_tasklet(struct tasklet_struct *t); +static void ace_bh_work(struct work_struct *work); static void ace_dump_trace(struct ace_private *ap); static void ace_set_multicast_list(struct net_device *dev); static int ace_change_mtu(struct net_device *dev, int new_mtu);

From patchwork Tue Jul 30 18:33:50 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747739
X-Patchwork-Delegate: kuba@kernel.org
From: Allen Pais
To: kuba@kernel.org, Shyam Sundar S K, "David S. Miller", Eric Dumazet, Paolo Abeni
Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais
Subject: [net-next v3 02/15] net: xgbe: Convert tasklet API to new bottom half workqueue mechanism
Date: Tue, 30 Jul 2024 11:33:50 -0700
Message-Id: <20240730183403.4176544-3-allen.lkml@gmail.com>
In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com>
References: <20240730183403.4176544-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. This
patch replaces all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the xgbe driver. This transition ensures
compatibility with the latest design and enhances performance.
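One detail worth noting in the xgbe conversion below: the driver only defers
interrupt handling when MSI or MSI-X is in use, so the old isr_as_tasklet
flag becomes isr_as_bh_work and the ISR either queues the work item or calls
the work function synchronously. A condensed sketch of that dispatch, using
hypothetical foo_* names rather than the actual xgbe symbols:

/* Sketch only: foo_* names are placeholders for the xgbe pattern below. */
struct foo_prv_data {
	unsigned int isr_as_bh_work;	/* set when MSI/MSI-X is enabled */
	struct work_struct dev_bh_work;
};

static void foo_isr_bh_work(struct work_struct *work)
{
	struct foo_prv_data *pdata = from_work(pdata, work, dev_bh_work);

	/* device interrupt processing, identical whether invoked directly
	 * from the ISR or deferred via system_bh_wq */
}

static irqreturn_t foo_isr(int irq, void *data)
{
	struct foo_prv_data *pdata = data;

	if (pdata->isr_as_bh_work)
		/* MSI/MSI-X: defer to BH context (was tasklet_schedule()) */
		queue_work(system_bh_wq, &pdata->dev_bh_work);
	else
		/* legacy/shared interrupt: run the handler right here
		 * (was a direct call of the tasklet function) */
		foo_isr_bh_work(&pdata->dev_bh_work);

	return IRQ_HANDLED;
}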
Signed-off-by: Allen Pais --- drivers/net/ethernet/amd/xgbe/xgbe-drv.c | 30 +++++++++++------------ drivers/net/ethernet/amd/xgbe/xgbe-i2c.c | 16 ++++++------ drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 16 ++++++------ drivers/net/ethernet/amd/xgbe/xgbe-pci.c | 4 +-- drivers/net/ethernet/amd/xgbe/xgbe.h | 10 ++++---- 5 files changed, 38 insertions(+), 38 deletions(-) diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c index c4a4e316683f..5475867708f4 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c @@ -403,9 +403,9 @@ static bool xgbe_ecc_ded(struct xgbe_prv_data *pdata, unsigned long *period, return false; } -static void xgbe_ecc_isr_task(struct tasklet_struct *t) +static void xgbe_ecc_isr_bh_work(struct work_struct *work) { - struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_ecc); + struct xgbe_prv_data *pdata = from_work(pdata, work, ecc_bh_work); unsigned int ecc_isr; bool stop = false; @@ -465,17 +465,17 @@ static irqreturn_t xgbe_ecc_isr(int irq, void *data) { struct xgbe_prv_data *pdata = data; - if (pdata->isr_as_tasklet) - tasklet_schedule(&pdata->tasklet_ecc); + if (pdata->isr_as_bh_work) + queue_work(system_bh_wq, &pdata->ecc_bh_work); else - xgbe_ecc_isr_task(&pdata->tasklet_ecc); + xgbe_ecc_isr_bh_work(&pdata->ecc_bh_work); return IRQ_HANDLED; } -static void xgbe_isr_task(struct tasklet_struct *t) +static void xgbe_isr_bh_work(struct work_struct *work) { - struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_dev); + struct xgbe_prv_data *pdata = from_work(pdata, work, dev_bh_work); struct xgbe_hw_if *hw_if = &pdata->hw_if; struct xgbe_channel *channel; unsigned int dma_isr, dma_ch_isr; @@ -582,7 +582,7 @@ static void xgbe_isr_task(struct tasklet_struct *t) /* If there is not a separate ECC irq, handle it here */ if (pdata->vdata->ecc_support && (pdata->dev_irq == pdata->ecc_irq)) - xgbe_ecc_isr_task(&pdata->tasklet_ecc); + xgbe_ecc_isr_bh_work(&pdata->ecc_bh_work); /* If there is not a separate I2C irq, handle it here */ if (pdata->vdata->i2c_support && (pdata->dev_irq == pdata->i2c_irq)) @@ -604,10 +604,10 @@ static irqreturn_t xgbe_isr(int irq, void *data) { struct xgbe_prv_data *pdata = data; - if (pdata->isr_as_tasklet) - tasklet_schedule(&pdata->tasklet_dev); + if (pdata->isr_as_bh_work) + queue_work(system_bh_wq, &pdata->dev_bh_work); else - xgbe_isr_task(&pdata->tasklet_dev); + xgbe_isr_bh_work(&pdata->dev_bh_work); return IRQ_HANDLED; } @@ -1007,8 +1007,8 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata) unsigned int i; int ret; - tasklet_setup(&pdata->tasklet_dev, xgbe_isr_task); - tasklet_setup(&pdata->tasklet_ecc, xgbe_ecc_isr_task); + INIT_WORK(&pdata->dev_bh_work, xgbe_isr_bh_work); + INIT_WORK(&pdata->ecc_bh_work, xgbe_ecc_isr_bh_work); ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0, netdev_name(netdev), pdata); @@ -1078,8 +1078,8 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata) devm_free_irq(pdata->dev, pdata->dev_irq, pdata); - tasklet_kill(&pdata->tasklet_dev); - tasklet_kill(&pdata->tasklet_ecc); + cancel_work_sync(&pdata->dev_bh_work); + cancel_work_sync(&pdata->ecc_bh_work); if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq)) devm_free_irq(pdata->dev, pdata->ecc_irq, pdata); diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c index a9ccc4258ee5..7a833894f52a 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c +++ 
b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c @@ -274,9 +274,9 @@ static void xgbe_i2c_clear_isr_interrupts(struct xgbe_prv_data *pdata, XI2C_IOREAD(pdata, IC_CLR_STOP_DET); } -static void xgbe_i2c_isr_task(struct tasklet_struct *t) +static void xgbe_i2c_isr_bh_work(struct work_struct *work) { - struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_i2c); + struct xgbe_prv_data *pdata = from_work(pdata, work, i2c_bh_work); struct xgbe_i2c_op_state *state = &pdata->i2c.op_state; unsigned int isr; @@ -321,10 +321,10 @@ static irqreturn_t xgbe_i2c_isr(int irq, void *data) { struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data; - if (pdata->isr_as_tasklet) - tasklet_schedule(&pdata->tasklet_i2c); + if (pdata->isr_as_bh_work) + queue_work(system_bh_wq, &pdata->i2c_bh_work); else - xgbe_i2c_isr_task(&pdata->tasklet_i2c); + xgbe_i2c_isr_bh_work(&pdata->i2c_bh_work); return IRQ_HANDLED; } @@ -369,7 +369,7 @@ static void xgbe_i2c_set_target(struct xgbe_prv_data *pdata, unsigned int addr) static irqreturn_t xgbe_i2c_combined_isr(struct xgbe_prv_data *pdata) { - xgbe_i2c_isr_task(&pdata->tasklet_i2c); + xgbe_i2c_isr_bh_work(&pdata->i2c_bh_work); return IRQ_HANDLED; } @@ -449,7 +449,7 @@ static void xgbe_i2c_stop(struct xgbe_prv_data *pdata) if (pdata->dev_irq != pdata->i2c_irq) { devm_free_irq(pdata->dev, pdata->i2c_irq, pdata); - tasklet_kill(&pdata->tasklet_i2c); + cancel_work_sync(&pdata->i2c_bh_work); } } @@ -464,7 +464,7 @@ static int xgbe_i2c_start(struct xgbe_prv_data *pdata) /* If we have a separate I2C irq, enable it */ if (pdata->dev_irq != pdata->i2c_irq) { - tasklet_setup(&pdata->tasklet_i2c, xgbe_i2c_isr_task); + INIT_WORK(&pdata->i2c_bh_work, xgbe_i2c_isr_bh_work); ret = devm_request_irq(pdata->dev, pdata->i2c_irq, xgbe_i2c_isr, 0, pdata->i2c_name, diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c index 4a2dc705b528..07f4f3418d01 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c @@ -703,9 +703,9 @@ static void xgbe_an73_isr(struct xgbe_prv_data *pdata) } } -static void xgbe_an_isr_task(struct tasklet_struct *t) +static void xgbe_an_isr_bh_work(struct work_struct *work) { - struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_an); + struct xgbe_prv_data *pdata = from_work(pdata, work, an_bh_work); netif_dbg(pdata, intr, pdata->netdev, "AN interrupt received\n"); @@ -727,17 +727,17 @@ static irqreturn_t xgbe_an_isr(int irq, void *data) { struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data; - if (pdata->isr_as_tasklet) - tasklet_schedule(&pdata->tasklet_an); + if (pdata->isr_as_bh_work) + queue_work(system_bh_wq, &pdata->an_bh_work); else - xgbe_an_isr_task(&pdata->tasklet_an); + xgbe_an_isr_bh_work(&pdata->an_bh_work); return IRQ_HANDLED; } static irqreturn_t xgbe_an_combined_isr(struct xgbe_prv_data *pdata) { - xgbe_an_isr_task(&pdata->tasklet_an); + xgbe_an_isr_bh_work(&pdata->an_bh_work); return IRQ_HANDLED; } @@ -1454,7 +1454,7 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata) if (pdata->dev_irq != pdata->an_irq) { devm_free_irq(pdata->dev, pdata->an_irq, pdata); - tasklet_kill(&pdata->tasklet_an); + cancel_work_sync(&pdata->an_bh_work); } pdata->phy_if.phy_impl.stop(pdata); @@ -1477,7 +1477,7 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata) /* If we have a separate AN irq, enable it */ if (pdata->dev_irq != pdata->an_irq) { - tasklet_setup(&pdata->tasklet_an, xgbe_an_isr_task); + INIT_WORK(&pdata->an_bh_work, xgbe_an_isr_bh_work); ret = 
devm_request_irq(pdata->dev, pdata->an_irq, xgbe_an_isr, 0, pdata->an_name, diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c index c5e5fac49779..c636999a6a84 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c @@ -139,7 +139,7 @@ static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata) return ret; } - pdata->isr_as_tasklet = 1; + pdata->isr_as_bh_work = 1; pdata->irq_count = ret; pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0); @@ -176,7 +176,7 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata) return ret; } - pdata->isr_as_tasklet = pdata->pcidev->msi_enabled ? 1 : 0; + pdata->isr_as_bh_work = pdata->pcidev->msi_enabled ? 1 : 0; pdata->irq_count = 1; pdata->channel_irq_count = 1; diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h index f01a1e566da6..d85386cac8d1 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe.h +++ b/drivers/net/ethernet/amd/xgbe/xgbe.h @@ -1298,11 +1298,11 @@ struct xgbe_prv_data { unsigned int lpm_ctrl; /* CTRL1 for resume */ - unsigned int isr_as_tasklet; - struct tasklet_struct tasklet_dev; - struct tasklet_struct tasklet_ecc; - struct tasklet_struct tasklet_i2c; - struct tasklet_struct tasklet_an; + unsigned int isr_as_bh_work; + struct work_struct dev_bh_work; + struct work_struct ecc_bh_work; + struct work_struct i2c_bh_work; + struct work_struct an_bh_work; struct dentry *xgbe_debugfs;

From patchwork Tue Jul 30 18:33:51 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747740
X-Patchwork-Delegate: kuba@kernel.org
From: Allen Pais
To: kuba@kernel.org, "David S. Miller", Eric Dumazet, Paolo Abeni
Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais
Subject: [net-next v3 03/15] net: cnic: Convert tasklet API to new bottom half workqueue mechanism
Date: Tue, 30 Jul 2024 11:33:51 -0700
Message-Id: <20240730183403.4176544-4-allen.lkml@gmail.com>
In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com>
References: <20240730183403.4176544-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. This
patch replaces all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the cnic driver. This transition ensures
compatibility with the latest design and enhances performance.

Signed-off-by: Allen Pais --- drivers/net/ethernet/broadcom/cnic.c | 19 ++++++++++--------- drivers/net/ethernet/broadcom/cnic.h | 2 +- 2 files changed, 11 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c index c2b4188a1ef1..a9040c42d2ff 100644 --- a/drivers/net/ethernet/broadcom/cnic.c +++ b/drivers/net/ethernet/broadcom/cnic.c @@ -31,6 +31,7 @@ #include #include #include +#include #if IS_ENABLED(CONFIG_VLAN_8021Q) #define BCM_VLAN 1 #endif @@ -3015,9 +3016,9 @@ static int cnic_service_bnx2(void *data, void *status_blk) return cnic_service_bnx2_queues(dev); } -static void cnic_service_bnx2_msix(struct tasklet_struct *t) +static void cnic_service_bnx2_msix(struct work_struct *work) { - struct cnic_local *cp = from_tasklet(cp, t, cnic_irq_task); + struct cnic_local *cp = from_work(cp, work, cnic_irq_bh_work); struct cnic_dev *dev = cp->dev; cp->last_status_idx = cnic_service_bnx2_queues(dev); @@ -3036,7 +3037,7 @@ static void cnic_doirq(struct cnic_dev *dev) prefetch(cp->status_blk.gen); prefetch(&cp->kcq1.kcq[KCQ_PG(prod)][KCQ_IDX(prod)]); - tasklet_schedule(&cp->cnic_irq_task); + queue_work(system_bh_wq, &cp->cnic_irq_bh_work); } } @@ -3140,9 +3141,9 @@ static u32 cnic_service_bnx2x_kcq(struct cnic_dev *dev, struct kcq_info *info) return last_status; } -static void cnic_service_bnx2x_bh(struct tasklet_struct *t) +static void cnic_service_bnx2x_bh_work(struct work_struct *work) { - struct cnic_local *cp = from_tasklet(cp, t, cnic_irq_task); + struct cnic_local *cp = from_work(cp, work, cnic_irq_bh_work); struct cnic_dev *dev = cp->dev; struct bnx2x *bp = netdev_priv(dev->netdev); u32 status_idx, new_status_idx; @@ -4428,7 +4429,7 @@ static void cnic_free_irq(struct cnic_dev *dev) if (ethdev->drv_state & CNIC_DRV_STATE_USING_MSIX) { cp->disable_int_sync(dev); - tasklet_kill(&cp->cnic_irq_task); + cancel_work_sync(&cp->cnic_irq_bh_work); free_irq(ethdev->irq_arr[0].vector,
dev); } } @@ -4441,7 +4442,7 @@ static int cnic_request_irq(struct cnic_dev *dev) err = request_irq(ethdev->irq_arr[0].vector, cnic_irq, 0, "cnic", dev); if (err) - tasklet_disable(&cp->cnic_irq_task); + disable_work_sync(&cp->cnic_irq_bh_work); return err; } @@ -4464,7 +4465,7 @@ static int cnic_init_bnx2_irq(struct cnic_dev *dev) CNIC_WR(dev, base + BNX2_HC_CMD_TICKS_OFF, (64 << 16) | 220); cp->last_status_idx = cp->status_blk.bnx2->status_idx; - tasklet_setup(&cp->cnic_irq_task, cnic_service_bnx2_msix); + INIT_WORK(&cp->cnic_irq_bh_work, cnic_service_bnx2_msix); err = cnic_request_irq(dev); if (err) return err; @@ -4873,7 +4874,7 @@ static int cnic_init_bnx2x_irq(struct cnic_dev *dev) struct cnic_eth_dev *ethdev = cp->ethdev; int err = 0; - tasklet_setup(&cp->cnic_irq_task, cnic_service_bnx2x_bh); + INIT_WORK(&cp->cnic_irq_bh_work, cnic_service_bnx2x_bh_work); if (ethdev->drv_state & CNIC_DRV_STATE_USING_MSIX) err = cnic_request_irq(dev); diff --git a/drivers/net/ethernet/broadcom/cnic.h b/drivers/net/ethernet/broadcom/cnic.h index fedc84ada937..1a314a75d2d2 100644 --- a/drivers/net/ethernet/broadcom/cnic.h +++ b/drivers/net/ethernet/broadcom/cnic.h @@ -268,7 +268,7 @@ struct cnic_local { u32 bnx2x_igu_sb_id; u32 int_num; u32 last_status_idx; - struct tasklet_struct cnic_irq_task; + struct work_struct cnic_irq_bh_work; struct kcqe *completed_kcq[MAX_COMPLETED_KCQE];

From patchwork Tue Jul 30 18:33:52 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747741
X-Patchwork-Delegate: kuba@kernel.org
From: Allen Pais
To: kuba@kernel.org, Nicolas Ferre, Claudiu Beznea, "David S. Miller", Eric Dumazet, Paolo Abeni
Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais
Subject: [net-next v3 04/15] net: macb: Convert tasklet API to new bottom half workqueue mechanism
Date: Tue, 30 Jul 2024 11:33:52 -0700
Message-Id: <20240730183403.4176544-5-allen.lkml@gmail.com>
In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com>
References: <20240730183403.4176544-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. This
patch replaces all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the macb driver. This transition ensures
compatibility with the latest design and enhances performance.

Signed-off-by: Allen Pais --- drivers/net/ethernet/cadence/macb.h | 3 ++- drivers/net/ethernet/cadence/macb_main.c | 10 +++++----- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h index ea71612f6b36..5740c98d8c9f 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -13,6 +13,7 @@ #include #include #include +#include #if defined(CONFIG_ARCH_DMA_ADDR_T_64BIT) || defined(CONFIG_MACB_USE_HWSTAMP) #define MACB_EXT_DESC @@ -1330,7 +1331,7 @@ struct macb { spinlock_t rx_fs_lock; unsigned int max_tuples; - struct tasklet_struct hresp_err_tasklet; + struct work_struct hresp_err_bh_work; int rx_bd_rd_prefetch; int tx_bd_rd_prefetch; diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index 11665be3a22c..95e8742dce1d 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -1792,9 +1792,9 @@ static int macb_tx_poll(struct napi_struct *napi, int budget) return work_done; } -static void macb_hresp_error_task(struct tasklet_struct *t) +static void macb_hresp_error_task(struct work_struct *work) { - struct macb *bp = from_tasklet(bp, t, hresp_err_tasklet); + struct macb *bp = from_work(bp, work, hresp_err_bh_work); struct net_device *dev = bp->dev; struct macb_queue *queue; unsigned int q; @@ -1994,7 +1994,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id) } if (status & MACB_BIT(HRESP)) { - tasklet_schedule(&bp->hresp_err_tasklet); + queue_work(system_bh_wq, &bp->hresp_err_bh_work); netdev_err(dev, "DMA bus error: HRESP not OK\n"); if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) @@ -5172,7 +5172,7 @@ static int macb_probe(struct platform_device *pdev) goto err_out_unregister_mdio; } - tasklet_setup(&bp->hresp_err_tasklet, macb_hresp_error_task); + INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task); netdev_info(dev, "Cadence %s
rev 0x%08x at 0x%08lx irq %d (%pM)\n", macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID), @@ -5216,7 +5216,7 @@ static void macb_remove(struct platform_device *pdev) mdiobus_free(bp->mii_bus); unregister_netdev(dev); - tasklet_kill(&bp->hresp_err_tasklet); + cancel_work_sync(&bp->hresp_err_bh_work); pm_runtime_disable(&pdev->dev); pm_runtime_dont_use_autosuspend(&pdev->dev); if (!pm_runtime_suspended(&pdev->dev)) {

From patchwork Tue Jul 30 18:33:53 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747742
X-Patchwork-Delegate: kuba@kernel.org
From: Allen Pais
To: kuba@kernel.org, "David S. Miller", Eric Dumazet, Paolo Abeni
Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais, Sunil Goutham
Subject: [net-next v3 05/15] net: cavium/liquidio: Convert tasklet API to new bottom half workqueue mechanism
Date: Tue, 30 Jul 2024 11:33:53 -0700
Message-Id: <20240730183403.4176544-6-allen.lkml@gmail.com>
In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com>
References: <20240730183403.4176544-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. This
patch replaces all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the cavium/liquidio driver. This transition
ensures compatibility with the latest design and enhances performance.
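Besides plain schedule/kill conversions, the liquidio hunks below also map
tasklet_disable()/tasklet_enable() onto disable_work_sync() and
enable_and_queue_work(): the droq bottom half is disabled while NAPI takes
over the queues and re-enabled (and queued once) when NAPI is torn down. A
condensed sketch with hypothetical foo_* names, collapsing into two helpers
what the real driver spreads across its open/stop/destroy paths:

/* Condensed sketch of the disable/enable mapping used below; foo_*
 * names are placeholders, the workqueue calls mirror the diff. */
static void foo_napi_take_over(struct foo_priv *priv)
{
	/* was: tasklet_disable(&priv->droq_tasklet);
	 * flush any queued instance and block further queueing */
	disable_work_sync(&priv->droq_bh_work);
}

static void foo_napi_release(struct foo_priv *priv)
{
	/* was: tasklet_enable(&priv->droq_tasklet);
	 * re-enable the work item and queue it once to catch up */
	enable_and_queue_work(system_bh_wq, &priv->droq_bh_work);
}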
Reviewed-by: Sunil Goutham Signed-off-by: Allen Pais --- .../net/ethernet/cavium/liquidio/lio_core.c | 4 ++-- .../net/ethernet/cavium/liquidio/lio_main.c | 24 +++++++++---------- .../ethernet/cavium/liquidio/lio_vf_main.c | 10 ++++---- .../ethernet/cavium/liquidio/octeon_droq.c | 4 ++-- .../ethernet/cavium/liquidio/octeon_main.h | 4 ++-- 5 files changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c index 674c54831875..37307e02a6ff 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_core.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c @@ -925,7 +925,7 @@ int liquidio_schedule_msix_droq_pkt_handler(struct octeon_droq *droq, u64 ret) if (OCTEON_CN23XX_VF(oct)) dev_err(&oct->pci_dev->dev, "should not come here should not get rx when poll mode = 0 for vf\n"); - tasklet_schedule(&oct_priv->droq_tasklet); + queue_work(system_bh_wq, &oct_priv->droq_bh_work); return 1; } /* this will be flushed periodically by check iq db */ @@ -975,7 +975,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct) droq->ops.napi_fn(droq); oct_priv->napi_mask |= BIT_ULL(oq_no); } else { - tasklet_schedule(&oct_priv->droq_tasklet); + queue_work(system_bh_wq, &oct_priv->droq_bh_work); } } } diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c index 1d79f6eaa41f..d348656c2f38 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c @@ -150,12 +150,12 @@ static int liquidio_set_vf_link_state(struct net_device *netdev, int vfidx, static struct handshake handshake[MAX_OCTEON_DEVICES]; static struct completion first_stage; -static void octeon_droq_bh(struct tasklet_struct *t) +static void octeon_droq_bh(struct work_struct *work) { int q_no; int reschedule = 0; - struct octeon_device_priv *oct_priv = from_tasklet(oct_priv, t, - droq_tasklet); + struct octeon_device_priv *oct_priv = from_work(oct_priv, work, + droq_bh_work); struct octeon_device *oct = oct_priv->dev; for (q_no = 0; q_no < MAX_OCTEON_OUTPUT_QUEUES(oct); q_no++) { @@ -180,7 +180,7 @@ static void octeon_droq_bh(struct tasklet_struct *t) } if (reschedule) - tasklet_schedule(&oct_priv->droq_tasklet); + queue_work(system_bh_wq, &oct_priv->droq_bh_work); } static int lio_wait_for_oq_pkts(struct octeon_device *oct) @@ -199,7 +199,7 @@ static int lio_wait_for_oq_pkts(struct octeon_device *oct) } if (pkt_cnt > 0) { pending_pkts += pkt_cnt; - tasklet_schedule(&oct_priv->droq_tasklet); + queue_work(system_bh_wq, &oct_priv->droq_bh_work); } pkt_cnt = 0; schedule_timeout_uninterruptible(1); @@ -1130,7 +1130,7 @@ static void octeon_destroy_resources(struct octeon_device *oct) break; } /* end switch (oct->status) */ - tasklet_kill(&oct_priv->droq_tasklet); + cancel_work_sync(&oct_priv->droq_bh_work); } /** @@ -1234,7 +1234,7 @@ static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx) list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) netif_napi_del(napi); - tasklet_enable(&oct_priv->droq_tasklet); + enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work); if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED) unregister_netdev(netdev); @@ -1770,7 +1770,7 @@ static int liquidio_open(struct net_device *netdev) int ret = 0; if (oct->props[lio->ifidx].napi_enabled == 0) { - tasklet_disable(&oct_priv->droq_tasklet); + disable_work_sync(&oct_priv->droq_bh_work); list_for_each_entry_safe(napi, n, 
&netdev->napi_list, dev_list) napi_enable(napi); @@ -1896,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev) if (OCTEON_CN23XX_PF(oct)) oct->droq[0]->ops.poll_mode = 0; - tasklet_enable(&oct_priv->droq_tasklet); + enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work); } dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name); @@ -4204,9 +4204,9 @@ static int octeon_device_init(struct octeon_device *octeon_dev) } } - /* Initialize the tasklet that handles output queue packet processing.*/ - dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq tasklet\n"); - tasklet_setup(&oct_priv->droq_tasklet, octeon_droq_bh); + /* Initialize the bh work that handles output queue packet processing.*/ + dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq bh work\n"); + INIT_WORK(&oct_priv->droq_bh_work, octeon_droq_bh); /* Setup the interrupt handler and record the INT SUM register address */ diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c index 62c2eadc33e3..04117625f388 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c @@ -87,7 +87,7 @@ static int lio_wait_for_oq_pkts(struct octeon_device *oct) } if (pkt_cnt > 0) { pending_pkts += pkt_cnt; - tasklet_schedule(&oct_priv->droq_tasklet); + queue_work(system_bh_wq, &oct_priv->droq_bh_work); } pkt_cnt = 0; schedule_timeout_uninterruptible(1); @@ -584,7 +584,7 @@ static void octeon_destroy_resources(struct octeon_device *oct) break; } - tasklet_kill(&oct_priv->droq_tasklet); + cancel_work_sync(&oct_priv->droq_bh_work); } /** @@ -687,7 +687,7 @@ static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx) list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) netif_napi_del(napi); - tasklet_enable(&oct_priv->droq_tasklet); + enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work); if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED) unregister_netdev(netdev); @@ -911,7 +911,7 @@ static int liquidio_open(struct net_device *netdev) int ret = 0; if (!oct->props[lio->ifidx].napi_enabled) { - tasklet_disable(&oct_priv->droq_tasklet); + disable_work_sync(&oct_priv->droq_bh_work); list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) napi_enable(napi); @@ -986,7 +986,7 @@ static int liquidio_stop(struct net_device *netdev) oct->droq[0]->ops.poll_mode = 0; - tasklet_enable(&oct_priv->droq_tasklet); + enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work); } cancel_delayed_work_sync(&lio->stats_wk.work); diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c index eef12fdd246d..4e5f8bbc891b 100644 --- a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c +++ b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c @@ -96,7 +96,7 @@ u32 octeon_droq_check_hw_for_pkts(struct octeon_droq *droq) last_count = pkt_count - droq->pkt_count; droq->pkt_count = pkt_count; - /* we shall write to cnts at napi irq enable or end of droq tasklet */ + /* we shall write to cnts at napi irq enable or end of droq bh_work */ if (last_count) atomic_add(last_count, &droq->pkts_pending); @@ -764,7 +764,7 @@ octeon_droq_process_packets(struct octeon_device *oct, (u16)rdisp->rinfo->recv_pkt->rh.r.subcode)); } - /* If there are packets pending. schedule tasklet again */ + /* If there are packets pending. 
schedule bh_work again */ if (atomic_read(&droq->pkts_pending)) return 1; diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_main.h b/drivers/net/ethernet/cavium/liquidio/octeon_main.h index 5b4cb725f60f..a8f2a0a7b08e 100644 --- a/drivers/net/ethernet/cavium/liquidio/octeon_main.h +++ b/drivers/net/ethernet/cavium/liquidio/octeon_main.h @@ -24,6 +24,7 @@ #define _OCTEON_MAIN_H_ #include +#include #if BITS_PER_LONG == 32 #define CVM_CAST64(v) ((long long)(v)) @@ -36,8 +37,7 @@ #define DRV_NAME "LiquidIO" struct octeon_device_priv { - /** Tasklet structures for this device. */ - struct tasklet_struct droq_tasklet; + struct work_struct droq_bh_work; unsigned long napi_mask; struct octeon_device *dev; };

From patchwork Tue Jul 30 18:33:54 2024
X-Patchwork-Submitter: Allen
X-Patchwork-Id: 13747743
X-Patchwork-Delegate: kuba@kernel.org
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VSOLAIhMr7ucmgYUhEqNv7YbAkTdXAs1lqXyxcdm0Hs=; b=l9IWWEdInDaYMXHOnJQ0oZmd8uS+nD/gFHwVUabQVrtqxgTw5n2B7tcmKEfelSzWCH cPcVlXbl6/jGlyqdK5cXJkWXh7EcMrmGCDnbPuQRFTzFnfPdX9N5adQJ20G5mf9EwIBY 23UtbeI2lsTk+kq6MgIbg4rKuo87Sm+Q+R0rKivFHnydkKG+XdrJyhIPSjXWkaT5RW6w qAoYOHCQy4I3LWXkrOmXZI/nmJvV+O9RKdohiL/C3ljGOCIpgqMrmLo/6vq9gHv/gXqR qhYV4Ij5rl9dG/Llw2tGfndkEwSuLmLUigpC2BVZyucNoIHkBmJN2MfWcvNH3sXg/UQd we9w== X-Forwarded-Encrypted: i=1; AJvYcCU4SJO/PzO7Xaqg12+5m/sLpQQzc0LiPmfxyL3OAg1uSppx91TyRyzfo6IGWXAD8GH83vDZASaqxzH0BNL9RAl2v6uLFtM6MRRxnKL61JoK8zX1+d4EQ3PCiiF3gZV36IeqHSNBa0qJAJ/Zo4mCvB4moIE8TTmUffPiJlKw25dgNQ== X-Gm-Message-State: AOJu0Yz7jm9DTAs4sB/d52weK/+osMf8ILlLIqxbuTNfM1a7Vel7Hv2y G9ing9BIemT0uzsrCKbkrT3AjPl57zXKty8sIcLW0ikEVRZgOEJ8 X-Google-Smtp-Source: AGHT+IGaW6btoR/A4Kaalo+y8iM7yRCvjX+nYGRNH0nXvrBuoT4A8PyVvkGhd6m9uRSpDxD0OfRKlg== X-Received: by 2002:a05:6a00:39a0:b0:70d:2b95:d9c0 with SMTP id d2e1a72fcca58-70ecea327demr16457353b3a.14.1722364464403; Tue, 30 Jul 2024 11:34:24 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:23 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 06/15] net: octeon: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:54 -0700 Message-Id: <20240730183403.4176544-7-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the cavium/octeon driver. This transition ensures compatibility with the latest design and enhances performance. 
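For reference, the conversions in this series follow the same mechanical pattern: tasklet_setup() becomes INIT_WORK(), tasklet_schedule() becomes queue_work() on system_bh_wq, from_tasklet() becomes from_work(), and tasklet_kill() becomes cancel_work_sync(). The sketch below shows that pattern in isolation; it is illustrative only, built around a made-up foo_dev structure and handler names rather than anything taken from the octeon_mgmt driver.

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct foo_dev {
	struct work_struct rx_bh_work;		/* was: struct tasklet_struct rx_tasklet */
};

/* was: static void foo_rx_tasklet(struct tasklet_struct *t) */
static void foo_rx_bh_work(struct work_struct *work)
{
	/* was: struct foo_dev *fd = from_tasklet(fd, t, rx_tasklet); */
	struct foo_dev *fd = from_work(fd, work, rx_bh_work);

	/* process completions for fd; the work item still runs as a bottom half */
}

static irqreturn_t foo_irq(int irq, void *data)
{
	struct foo_dev *fd = data;

	/* was: tasklet_schedule(&fd->rx_tasklet); */
	queue_work(system_bh_wq, &fd->rx_bh_work);
	return IRQ_HANDLED;
}

static void foo_setup(struct foo_dev *fd)
{
	/* was: tasklet_setup(&fd->rx_tasklet, foo_rx_tasklet); */
	INIT_WORK(&fd->rx_bh_work, foo_rx_bh_work);
}

static void foo_teardown(struct foo_dev *fd)
{
	/* was: tasklet_kill(&fd->rx_tasklet); */
	cancel_work_sync(&fd->rx_bh_work);
}

The hunks that follow apply this substitution to octeon_mgmt's tx_clean path.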
Signed-off-by: Allen Pais --- drivers/net/ethernet/cavium/octeon/octeon_mgmt.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c index 744f2434f7fa..0db993c1cc36 100644 --- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -144,7 +145,7 @@ struct octeon_mgmt { unsigned int last_speed; struct device *dev; struct napi_struct napi; - struct tasklet_struct tx_clean_tasklet; + struct work_struct tx_clean_bh_work; struct device_node *phy_np; resource_size_t mix_phys; resource_size_t mix_size; @@ -315,9 +316,9 @@ static void octeon_mgmt_clean_tx_buffers(struct octeon_mgmt *p) netif_wake_queue(p->netdev); } -static void octeon_mgmt_clean_tx_tasklet(struct tasklet_struct *t) +static void octeon_mgmt_clean_tx_bh_work(struct work_struct *work) { - struct octeon_mgmt *p = from_tasklet(p, t, tx_clean_tasklet); + struct octeon_mgmt *p = from_work(p, work, tx_clean_bh_work); octeon_mgmt_clean_tx_buffers(p); octeon_mgmt_enable_tx_irq(p); } @@ -684,7 +685,7 @@ static irqreturn_t octeon_mgmt_interrupt(int cpl, void *dev_id) } if (mixx_isr.s.orthresh) { octeon_mgmt_disable_tx_irq(p); - tasklet_schedule(&p->tx_clean_tasklet); + queue_work(system_bh_wq, &p->tx_clean_bh_work); } return IRQ_HANDLED; @@ -1487,8 +1488,8 @@ static int octeon_mgmt_probe(struct platform_device *pdev) skb_queue_head_init(&p->tx_list); skb_queue_head_init(&p->rx_list); - tasklet_setup(&p->tx_clean_tasklet, - octeon_mgmt_clean_tx_tasklet); + INIT_WORK(&p->tx_clean_bh_work, + octeon_mgmt_clean_tx_bh_work); netdev->priv_flags |= IFF_UNICAST_FLT; From patchwork Tue Jul 30 18:33:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747744 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pg1-f179.google.com (mail-pg1-f179.google.com [209.85.215.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C209A1A4B38; Tue, 30 Jul 2024 18:34:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364469; cv=none; b=Ec0x87/nDh+pDWvvKpDSDaUQwp44wdin6V06UPEajssWk5hfMJBZFeEgkyOmeGEUsxYNq8DYToOMZxOnRsY0ZIahwIyieJM1oXnoGTF2Ldz3XpktJP3y9KAtYIRGWeTadT9dA21bTBPpE9EqS5ds4UkKOeVadJ3GuEv5Ry47He4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364469; c=relaxed/simple; bh=YYqqPZRCkNJn0vuIumHHz/AM1x5zWpf+RFUKWaHos6Y=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=oOLfJh7WbXFFIj/T5yK2/Hch0aek0E74UhuGQEZ6zyMJ3dRkxIc2ejF0OPnA4gBX9EY1k/RieNO9Rlh/y2EGZtrMme1fJ8zcEu2JF6laebdoxowFyen/StxPVDFe0dQ024r4cvtfJLY/dO4K1AZG/q99PEVYw9HHkihlV+aKXdI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=aOKsgbGz; arc=none smtp.client-ip=209.85.215.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="aOKsgbGz" Received: by mail-pg1-f179.google.com with SMTP id 41be03b00d2f7-6e7b121be30so3042602a12.1; Tue, 30 Jul 2024 11:34:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364467; x=1722969267; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CJet9BwGTsiRkCm4isnF6ncb/nhFhGOxt0nK/Q3v+7o=; b=aOKsgbGzdEz4sbFg/IR6PT7pne78NhlK7X0b4bueG7WxHCp9MejK3kDRVvebB7jO16 uqHpj/jCqnHrAN11GH3DEvjXQVG5/sL9weGUQSR4UAN9mVzfQ6keULkl/QhSDHKSRZij wE8A9FGBjhdgpIFghUfbwHU7Teb8QiEA+QbgfSdEEhsIh1FQEB8GOu3VcYKpGyhqQm3G rlI5Rmyj8vIKSqvqpgCf5KR4LN6dihLSeCzga2LFhinLafPJUrKPPHExzQ6vqzGkwLRs gGQrjhF1JHzasUM6bz02YNeT69OHi413SBB8WL9yvbSPo/ye8Nfbxlgj2OSdkNfwmbpI r82A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364467; x=1722969267; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CJet9BwGTsiRkCm4isnF6ncb/nhFhGOxt0nK/Q3v+7o=; b=ixKf0wi0SqJP2KDQcvkFyhrTQPGQ+C08ACqdHWZyqwI7YbadLU1+zjyVjPoNxSrAfL VXqDj7hkA3pbQ/OfJpvNZJDs9FcwohcuOn5dTQSBmCoM9Fxuc3RHtwAm64E+Ia/nqUQx MMDCHNeHL3wbKq1QDcarEAEISKba1Wb5pbsRacJoLMlTMt8V9CYxHsZ8cyFgQoY6bwfu G5Vb0qXyWXt6OjcOd5vQG6WswAMDT/1KFXyjwetMWa96bNl31WlPuhVAZCsjdSmmTGdm QILqxQ/R56x7OrRwxQGCMdHce7cs/iR0575ilbQfjYTogpofAurGfR5IKjdH+zdeusbp 2Oqw== X-Forwarded-Encrypted: i=1; AJvYcCXdtRTLM173nxcMNmZyEYnwWAOs8/Wy/za468EmoIwgbZOdz+HkQ5mDrIG0vRP1D1qXJeexIpwACoOzNtdc4f3be5xN2d4ggeNEHxNvRBRN2rOe8sH3Dslggk7bhycisDdUL5hnD9ZNpMVLgG75kKqvw8gqB44oC2HZ+fTEs+feQg== X-Gm-Message-State: AOJu0YxubHImlKNLE4rkrgWOOXDZgClExVEapYUKkgYWhSv3z0HgkBxu FIbdc6AfJjiUdycudy42O1nr6QSTGkPB79eKIIiYZyzWdPvQ7sTx X-Google-Smtp-Source: AGHT+IGgF9xUzhoSsyAmcfnDOvxTdaMEnsGXOm516gr3MmxOJWQ9CxoA/oJ6bYQKfm6HPOJaEuM+ZA== X-Received: by 2002:a05:6a20:12d3:b0:1c2:905c:dba with SMTP id adf61e73a8af0-1c4a153356amr10354060637.54.1722364466988; Tue, 30 Jul 2024 11:34:26 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:26 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Sunil Goutham , "David S. 
Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais , linux-arm-kernel@lists.infradead.org Subject: [net-next v3 07/15] net: thunderx: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:55 -0700 Message-Id: <20240730183403.4176544-8-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the cavium/thunderx driver. This transition ensures compatibility with the latest design and enhances performance. Reviewed-by: Sunil Goutham Signed-off-by: Allen Pais --- drivers/net/ethernet/cavium/thunder/nic.h | 5 ++-- .../net/ethernet/cavium/thunder/nicvf_main.c | 24 +++++++++---------- .../ethernet/cavium/thunder/nicvf_queues.c | 4 ++-- .../ethernet/cavium/thunder/nicvf_queues.h | 2 +- 4 files changed, 18 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/cavium/thunder/nic.h b/drivers/net/ethernet/cavium/thunder/nic.h index 090d6b83982a..ecc175b6e7fa 100644 --- a/drivers/net/ethernet/cavium/thunder/nic.h +++ b/drivers/net/ethernet/cavium/thunder/nic.h @@ -8,6 +8,7 @@ #include #include +#include #include #include "thunder_bgx.h" @@ -295,7 +296,7 @@ struct nicvf { bool rb_work_scheduled; struct page *rb_page; struct delayed_work rbdr_work; - struct tasklet_struct rbdr_task; + struct work_struct rbdr_bh_work; /* Secondary Qset */ u8 sqs_count; @@ -319,7 +320,7 @@ struct nicvf { bool loopback_supported; struct nicvf_rss_info rss_info; struct nicvf_pfc pfc; - struct tasklet_struct qs_err_task; + struct work_struct qs_err_bh_work; struct work_struct reset_task; struct nicvf_work rx_mode_work; /* spinlock to protect workqueue arguments from concurrent access */ diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index aebb9fef3f6e..b0878bd25cf0 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -982,9 +982,9 @@ static int nicvf_poll(struct napi_struct *napi, int budget) * * As of now only CQ errors are handled */ -static void nicvf_handle_qs_err(struct tasklet_struct *t) +static void nicvf_handle_qs_err(struct work_struct *work) { - struct nicvf *nic = from_tasklet(nic, t, qs_err_task); + struct nicvf *nic = from_work(nic, work, qs_err_bh_work); struct queue_set *qs = nic->qs; int qidx; u64 status; @@ -1069,7 +1069,7 @@ static irqreturn_t 
nicvf_rbdr_intr_handler(int irq, void *nicvf_irq) if (!nicvf_is_intr_enabled(nic, NICVF_INTR_RBDR, qidx)) continue; nicvf_disable_intr(nic, NICVF_INTR_RBDR, qidx); - tasklet_hi_schedule(&nic->rbdr_task); + queue_work(system_bh_highpri_wq, &nic->rbdr_bh_work); /* Clear interrupt */ nicvf_clear_intr(nic, NICVF_INTR_RBDR, qidx); } @@ -1085,7 +1085,7 @@ static irqreturn_t nicvf_qs_err_intr_handler(int irq, void *nicvf_irq) /* Disable Qset err interrupt and schedule softirq */ nicvf_disable_intr(nic, NICVF_INTR_QS_ERR, 0); - tasklet_hi_schedule(&nic->qs_err_task); + queue_work(system_bh_highpri_wq, &nic->qs_err_bh_work); nicvf_clear_intr(nic, NICVF_INTR_QS_ERR, 0); return IRQ_HANDLED; @@ -1364,8 +1364,8 @@ int nicvf_stop(struct net_device *netdev) for (irq = 0; irq < nic->num_vec; irq++) synchronize_irq(pci_irq_vector(nic->pdev, irq)); - tasklet_kill(&nic->rbdr_task); - tasklet_kill(&nic->qs_err_task); + cancel_work_sync(&nic->rbdr_bh_work); + cancel_work_sync(&nic->qs_err_bh_work); if (nic->rb_work_scheduled) cancel_delayed_work_sync(&nic->rbdr_work); @@ -1488,11 +1488,11 @@ int nicvf_open(struct net_device *netdev) nicvf_hw_set_mac_addr(nic, netdev); } - /* Init tasklet for handling Qset err interrupt */ - tasklet_setup(&nic->qs_err_task, nicvf_handle_qs_err); + /* Init bh_work for handling Qset err interrupt */ + INIT_WORK(&nic->qs_err_bh_work, nicvf_handle_qs_err); - /* Init RBDR tasklet which will refill RBDR */ - tasklet_setup(&nic->rbdr_task, nicvf_rbdr_task); + /* Init RBDR bh_work which will refill RBDR */ + INIT_WORK(&nic->rbdr_bh_work, nicvf_rbdr_bh_work); INIT_DELAYED_WORK(&nic->rbdr_work, nicvf_rbdr_work); /* Configure CPI alorithm */ @@ -1561,8 +1561,8 @@ int nicvf_open(struct net_device *netdev) cleanup: nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0); nicvf_unregister_interrupts(nic); - tasklet_kill(&nic->qs_err_task); - tasklet_kill(&nic->rbdr_task); + cancel_work_sync(&nic->qs_err_bh_work); + cancel_work_sync(&nic->rbdr_bh_work); napi_del: for (qidx = 0; qidx < qs->cq_cnt; qidx++) { cq_poll = nic->napi[qidx]; diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c index 06397cc8bb36..ad71160879e4 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c @@ -461,9 +461,9 @@ void nicvf_rbdr_work(struct work_struct *work) } /* In Softirq context, alloc rcv buffers in atomic mode */ -void nicvf_rbdr_task(struct tasklet_struct *t) +void nicvf_rbdr_bh_work(struct work_struct *work) { - struct nicvf *nic = from_tasklet(nic, t, rbdr_task); + struct nicvf *nic = from_work(nic, work, rbdr_bh_work); nicvf_refill_rbdr(nic, GFP_ATOMIC); if (nic->rb_alloc_fail) { diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h index 8453defc296c..c6f18fb7c50e 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h +++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h @@ -348,7 +348,7 @@ void nicvf_xdp_sq_doorbell(struct nicvf *nic, struct snd_queue *sq, int sq_num); struct sk_buff *nicvf_get_rcv_skb(struct nicvf *nic, struct cqe_rx_t *cqe_rx, bool xdp); -void nicvf_rbdr_task(struct tasklet_struct *t); +void nicvf_rbdr_bh_work(struct work_struct *work); void nicvf_rbdr_work(struct work_struct *work); void nicvf_enable_intr(struct nicvf *nic, int int_type, int q_idx); From patchwork Tue Jul 30 18:33:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747745 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oi1-f170.google.com (mail-oi1-f170.google.com [209.85.167.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 424861BD4E5; Tue, 30 Jul 2024 18:34:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.167.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364473; cv=none; b=Qrjv0rW/5Mg2JUOjZcxZFMYwj0su9ibH4lk+CoJBDOLWRRW96rHwTN75KJ1jGBAKZkPduPJRVmNEPwDCNmP7mKOXDcRtN7NVbEKUJxspXZlEyiapJYqeJdXB8MP71A/TZ8eSZIwnT4FLZSXARMm5T0Ie6FSbkdRcmmRZ0LbUZpY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364473; c=relaxed/simple; bh=WtNE/dmbZmq1ThtIDUkgFnTonDnp8MMPvTwAHTwhhpo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=tIpwAeyqQH5AN+k44RYzkPL+YoIAaFYQ77Uos8VFp8ehbdf7CkBTYEXR3QY6aqRY+yj2H5Xp44Mio3+mw005sw5vRnCpSicTGBUAok1VqY8j8DxRB6cAyiGYCQQGBrxPbkBRAsepR/8/vbsl3Dvh6mkOvi45meq07viV7yRFNZw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=fmbxP6/r; arc=none smtp.client-ip=209.85.167.170 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="fmbxP6/r" Received: by mail-oi1-f170.google.com with SMTP id 5614622812f47-3db16b2c1d2so3527715b6e.2; Tue, 30 Jul 2024 11:34:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364470; x=1722969270; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mHsWDKW4hHN8DyEjz7y8O8xUw7bLEPYBA03SJ52fByU=; b=fmbxP6/rWQmcD38w5LjZpoZaQnRy6R2eTg1jVponMc/4XNayzPnP5ufmPTATwH37OF +8baGkjs8T2a213A281qmzjzGGeckcKTTyfFLNdKU00XdQFB3sIzMwANjRRRapMBZ7N8 8vRFUYNFWDD3ZWqiWlQbHLPAzSYSnd9yBbobt4jPFMaFRpxdnE3n5Zr7dIPEZuQ+Lm8M o1dUnA+iGa60VswKlNbDRCunnOxNRXdazNCYCZz0ITz/9j7cdqk+GDlfEgICdAJuzvW6 QtWn8TwT9/WCJsgKTqCzUVKPNXgqWbFeG8RAb+VN+tJgyDQOG6V4IcUitj5taLtICGnO VOXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364470; x=1722969270; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mHsWDKW4hHN8DyEjz7y8O8xUw7bLEPYBA03SJ52fByU=; b=hHe+0IlDkrU/66PIF4JFk7cq9Djm5DDj3N+biRNn9T93EcMv9wu1hCZgByryNe2slT on51gp4u/3IMdSXrAh59AY0NoDlPaV7k4Ki+ZpqyfPH1oYpACtytLcRl4uhB6TPQ0DoA W1/oHGB93LBVPd6yp6FOkwWeFPTvdQSEBt6t+VsTuHh0d6YKNjv9UGzhQwt19vPp9VZB Vg2TWDnMIdGk06dVMQ+Rwt4pbE0rXNEIpxlTXZSk1IQIpUBnVJq8lxOWAADUkn/dGbqc 4Z8v1C2eWnlMOrqobBulaPGS9KDpUxBGKNpyoYFtyQLpSoufZGdIxAbAKv3NOfQVASVd klfg== X-Forwarded-Encrypted: i=1; AJvYcCWE31aWeYnZ1JzKxZA5Va/tBVsDWEXBzq0mOyfAlhf/HSKF+EwqCDpkHpv1xsIs4lqYT1iQ3kmKBgVnDBzSLB2RxtIUcbA0yoQHirjm0806zdS3Lnn9sz09jkMLqMvOOq3qumtzgAhDLJPm6/OQ5Pet4bV4ixLb27P5c/TYK+VMGw== X-Gm-Message-State: AOJu0YwffHp6vU/w/I/+2A6u9ZmHjIjoX1exP0UxahpQ7eIM+V04by9G 
8vkBgI2hVjtJ+ru7CjWYSK9pcFY2tpI8YefzbehT06vzYnkGFYNC X-Google-Smtp-Source: AGHT+IFjKAYED3/vuEz7A8UASYPs6YorgrdUyrQaWZQZz08P7/46riBLlqI+PSaBWWESKO+XW1zBLQ== X-Received: by 2002:a05:6808:1827:b0:3d6:53fc:e813 with SMTP id 5614622812f47-3db23a3b32dmr15286599b6e.27.1722364470106; Tue, 30 Jul 2024 11:34:30 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:29 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni , Potnuri Bharat Teja Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 08/15] net: chelsio: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:56 -0700 Message-Id: <20240730183403.4176544-9-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the chelsio driver. This transition ensures compatibility with the latest design and enhances performance. 
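One detail specific to this driver (and to thunderx before it): callers of tasklet_hi_schedule() are moved onto the high-priority BH workqueue, system_bh_highpri_wq, while ordinary tasklet_schedule() callers use system_bh_wq. A minimal sketch of the high-priority variant, using an invented bar_dev structure rather than the driver's real types:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct bar_dev {
	struct work_struct sched_bh_work;	/* was: struct tasklet_struct sched_tsk */
};

static irqreturn_t bar_tx_irq(int irq, void *data)
{
	struct bar_dev *bd = data;

	/* was: tasklet_hi_schedule(&bd->sched_tsk); */
	queue_work(system_bh_highpri_wq, &bd->sched_bh_work);
	return IRQ_HANDLED;
}

In cxgb/sge.c the high-priority queueing happens from update_tx_info() rather than directly from an interrupt handler, but the call itself is the same.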
Signed-off-by: Allen Pais --- drivers/net/ethernet/chelsio/cxgb/sge.c | 19 ++++----- drivers/net/ethernet/chelsio/cxgb4/cxgb4.h | 9 +++-- .../net/ethernet/chelsio/cxgb4/cxgb4_main.c | 2 +- .../ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c | 4 +- .../net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 2 +- drivers/net/ethernet/chelsio/cxgb4/sge.c | 40 +++++++++---------- drivers/net/ethernet/chelsio/cxgb4vf/sge.c | 6 +-- 7 files changed, 42 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb/sge.c b/drivers/net/ethernet/chelsio/cxgb/sge.c index 861edff5ed89..4dab9b0dca86 100644 --- a/drivers/net/ethernet/chelsio/cxgb/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb/sge.c @@ -229,11 +229,11 @@ struct sched { unsigned int port; /* port index (round robin ports) */ unsigned int num; /* num skbs in per port queues */ struct sched_port p[MAX_NPORTS]; - struct tasklet_struct sched_tsk;/* tasklet used to run scheduler */ + struct work_struct sched_bh_work;/* bh_work used to run scheduler */ struct sge *sge; }; -static void restart_sched(struct tasklet_struct *t); +static void restart_sched(struct work_struct *work); /* @@ -270,14 +270,14 @@ static const u8 ch_mac_addr[ETH_ALEN] = { }; /* - * stop tasklet and free all pending skb's + * stop bh_work and free all pending skb's */ static void tx_sched_stop(struct sge *sge) { struct sched *s = sge->tx_sched; int i; - tasklet_kill(&s->sched_tsk); + cancel_work_sync(&s->sched_bh_work); for (i = 0; i < MAX_NPORTS; i++) __skb_queue_purge(&s->p[s->port].skbq); @@ -371,7 +371,7 @@ static int tx_sched_init(struct sge *sge) return -ENOMEM; pr_debug("tx_sched_init\n"); - tasklet_setup(&s->sched_tsk, restart_sched); + INIT_WORK(&s->sched_bh_work, restart_sched); s->sge = sge; sge->tx_sched = s; @@ -1300,12 +1300,12 @@ static inline void reclaim_completed_tx(struct sge *sge, struct cmdQ *q) } /* - * Called from tasklet. Checks the scheduler for any + * Called from bh context. Checks the scheduler for any * pending skbs that can be sent. 
*/ -static void restart_sched(struct tasklet_struct *t) +static void restart_sched(struct work_struct *work) { - struct sched *s = from_tasklet(s, t, sched_tsk); + struct sched *s = from_work(s, work, sched_bh_work); struct sge *sge = s->sge; struct adapter *adapter = sge->adapter; struct cmdQ *q = &sge->cmdQ[0]; @@ -1451,7 +1451,8 @@ static unsigned int update_tx_info(struct adapter *adapter, writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL); } if (sge->tx_sched) - tasklet_hi_schedule(&sge->tx_sched->sched_tsk); + queue_work(system_bh_highpri_wq, + &sge->tx_sched->sched_bh_work); flags &= ~F_CMDQ0_ENABLE; } diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h index fca9533bc011..846040f5e638 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h @@ -53,6 +53,7 @@ #include #include #include +#include #include #include #include "t4_chip_type.h" @@ -880,7 +881,7 @@ struct sge_uld_txq { /* state for an SGE offload Tx queue */ struct sge_txq q; struct adapter *adap; struct sk_buff_head sendq; /* list of backpressured packets */ - struct tasklet_struct qresume_tsk; /* restarts the queue */ + struct work_struct qresume_bh_work; /* restarts the queue */ bool service_ofldq_running; /* service_ofldq() is processing sendq */ u8 full; /* the Tx ring is full */ unsigned long mapping_err; /* # of I/O MMU packet mapping errors */ @@ -890,7 +891,7 @@ struct sge_ctrl_txq { /* state for an SGE control Tx queue */ struct sge_txq q; struct adapter *adap; struct sk_buff_head sendq; /* list of backpressured packets */ - struct tasklet_struct qresume_tsk; /* restarts the queue */ + struct work_struct qresume_bh_work; /* restarts the queue */ u8 full; /* the Tx ring is full */ } ____cacheline_aligned_in_smp; @@ -946,7 +947,7 @@ struct sge_eosw_txq { u32 hwqid; /* Underlying hardware queue index */ struct net_device *netdev; /* Pointer to netdevice */ - struct tasklet_struct qresume_tsk; /* Restarts the queue */ + struct work_struct qresume_bh_work; /* Restarts the queue */ struct completion completion; /* completion for FLOWC rendezvous */ }; @@ -2107,7 +2108,7 @@ void free_tx_desc(struct adapter *adap, struct sge_txq *q, void cxgb4_eosw_txq_free_desc(struct adapter *adap, struct sge_eosw_txq *txq, u32 ndesc); int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc); -void cxgb4_ethofld_restart(struct tasklet_struct *t); +void cxgb4_ethofld_restart(struct work_struct *work); int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, const struct pkt_gl *si); void free_txq(struct adapter *adap, struct sge_txq *q); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index 2418645c8823..179517e90da7 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -589,7 +589,7 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp, struct sge_uld_txq *oq; oq = container_of(txq, struct sge_uld_txq, q); - tasklet_schedule(&oq->qresume_tsk); + queue_work(system_bh_wq, &oq->qresume_bh_work); } } else if (opcode == CPL_FW6_MSG || opcode == CPL_FW4_MSG) { const struct cpl_fw6_msg *p = (void *)rsp; diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c index 338b04f339b3..c165d3393e6e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c @@ 
-114,7 +114,7 @@ static int cxgb4_init_eosw_txq(struct net_device *dev, eosw_txq->cred = adap->params.ofldq_wr_cred; eosw_txq->hwqid = hwqid; eosw_txq->netdev = dev; - tasklet_setup(&eosw_txq->qresume_tsk, cxgb4_ethofld_restart); + INIT_WORK(&eosw_txq->qresume_bh_work, cxgb4_ethofld_restart); return 0; } @@ -143,7 +143,7 @@ static void cxgb4_free_eosw_txq(struct net_device *dev, cxgb4_clean_eosw_txq(dev, eosw_txq); kfree(eosw_txq->desc); spin_unlock_bh(&eosw_txq->lock); - tasklet_kill(&eosw_txq->qresume_tsk); + cancel_work_sync(&eosw_txq->qresume_bh_work); } static int cxgb4_mqprio_alloc_hw_resources(struct net_device *dev) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c index 5c13bcb4550d..d9bdf0b1eb69 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c @@ -407,7 +407,7 @@ free_sge_txq_uld(struct adapter *adap, struct sge_uld_txq_info *txq_info) struct sge_uld_txq *txq = &txq_info->uldtxq[i]; if (txq->q.desc) { - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); t4_ofld_eq_free(adap, adap->mbox, adap->pf, 0, txq->q.cntxt_id); free_tx_desc(adap, &txq->q, txq->q.in_use, false); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index de52bcb884c4..d054979ef850 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -2769,15 +2769,15 @@ static int ctrl_xmit(struct sge_ctrl_txq *q, struct sk_buff *skb) /** * restart_ctrlq - restart a suspended control queue - * @t: pointer to the tasklet associated with this handler + * @work: pointer to the work struct associated with this handler * * Resumes transmission on a suspended Tx control queue. */ -static void restart_ctrlq(struct tasklet_struct *t) +static void restart_ctrlq(struct work_struct *work) { struct sk_buff *skb; unsigned int written = 0; - struct sge_ctrl_txq *q = from_tasklet(q, t, qresume_tsk); + struct sge_ctrl_txq *q = from_work(q, work, qresume_bh_work); spin_lock(&q->sendq.lock); reclaim_completed_tx_imm(&q->q); @@ -3075,13 +3075,13 @@ static int ofld_xmit(struct sge_uld_txq *q, struct sk_buff *skb) /** * restart_ofldq - restart a suspended offload queue - * @t: pointer to the tasklet associated with this handler + * @work: pointer to the work struct associated with this handler * * Resumes transmission on a suspended Tx offload queue. */ -static void restart_ofldq(struct tasklet_struct *t) +static void restart_ofldq(struct work_struct *work) { - struct sge_uld_txq *q = from_tasklet(q, t, qresume_tsk); + struct sge_uld_txq *q = from_work(q, work, qresume_bh_work); spin_lock(&q->sendq.lock); q->full = 0; /* the queue actually is completely empty now */ @@ -4020,10 +4020,10 @@ static int napi_rx_handler(struct napi_struct *napi, int budget) return work_done; } -void cxgb4_ethofld_restart(struct tasklet_struct *t) +void cxgb4_ethofld_restart(struct work_struct *work) { - struct sge_eosw_txq *eosw_txq = from_tasklet(eosw_txq, t, - qresume_tsk); + struct sge_eosw_txq *eosw_txq = from_work(eosw_txq, work, + qresume_bh_work); int pktcount; spin_lock(&eosw_txq->lock); @@ -4050,7 +4050,7 @@ void cxgb4_ethofld_restart(struct tasklet_struct *t) * @si: the gather list of packet fragments * * Process a ETHOFLD Tx completion. Increment the cidx here, but - * free up the descriptors in a tasklet later. + * free up the descriptors later in bh_work. 
*/ int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, const struct pkt_gl *si) @@ -4117,10 +4117,10 @@ int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, spin_unlock(&eosw_txq->lock); - /* Schedule a tasklet to reclaim SKBs and restart ETHOFLD Tx, + /* Schedule a bh work to reclaim SKBs and restart ETHOFLD Tx, * if there were packets waiting for completion. */ - tasklet_schedule(&eosw_txq->qresume_tsk); + queue_work(system_bh_wq, &eosw_txq->qresume_bh_work); } out_done: @@ -4279,7 +4279,7 @@ static void sge_tx_timer_cb(struct timer_list *t) struct sge_uld_txq *txq = s->egr_map[id]; clear_bit(id, s->txq_maperr); - tasklet_schedule(&txq->qresume_tsk); + queue_work(system_bh_wq, &txq->qresume_bh_work); } if (!is_t4(adap->params.chip)) { @@ -4719,7 +4719,7 @@ int t4_sge_alloc_ctrl_txq(struct adapter *adap, struct sge_ctrl_txq *txq, init_txq(adap, &txq->q, FW_EQ_CTRL_CMD_EQID_G(ntohl(c.cmpliqid_eqid))); txq->adap = adap; skb_queue_head_init(&txq->sendq); - tasklet_setup(&txq->qresume_tsk, restart_ctrlq); + INIT_WORK(&txq->qresume_bh_work, restart_ctrlq); txq->full = 0; return 0; } @@ -4809,7 +4809,7 @@ int t4_sge_alloc_uld_txq(struct adapter *adap, struct sge_uld_txq *txq, txq->q.q_type = CXGB4_TXQ_ULD; txq->adap = adap; skb_queue_head_init(&txq->sendq); - tasklet_setup(&txq->qresume_tsk, restart_ofldq); + INIT_WORK(&txq->qresume_bh_work, restart_ofldq); txq->full = 0; txq->mapping_err = 0; return 0; @@ -4952,7 +4952,7 @@ void t4_free_sge_resources(struct adapter *adap) struct sge_ctrl_txq *cq = &adap->sge.ctrlq[i]; if (cq->q.desc) { - tasklet_kill(&cq->qresume_tsk); + cancel_work_sync(&cq->qresume_bh_work); t4_ctrl_eq_free(adap, adap->mbox, adap->pf, 0, cq->q.cntxt_id); __skb_queue_purge(&cq->sendq); @@ -5002,7 +5002,7 @@ void t4_sge_start(struct adapter *adap) * t4_sge_stop - disable SGE operation * @adap: the adapter * - * Stop tasklets and timers associated with the DMA engine. Note that + * Stop bh works and timers associated with the DMA engine. Note that * this is effective only if measures have been taken to disable any HW * events that may restart them. */ @@ -5025,7 +5025,7 @@ void t4_sge_stop(struct adapter *adap) for_each_ofldtxq(&adap->sge, i) { if (txq->q.desc) - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); } } } @@ -5039,7 +5039,7 @@ void t4_sge_stop(struct adapter *adap) for_each_ofldtxq(&adap->sge, i) { if (txq->q.desc) - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); } } } @@ -5048,7 +5048,7 @@ void t4_sge_stop(struct adapter *adap) struct sge_ctrl_txq *cq = &s->ctrlq[i]; if (cq->q.desc) - tasklet_kill(&cq->qresume_tsk); + cancel_work_sync(&cq->qresume_bh_work); } } diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c index 5b1d746e6563..1f4628178d28 100644 --- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c @@ -2587,7 +2587,7 @@ void t4vf_free_sge_resources(struct adapter *adapter) * t4vf_sge_start - enable SGE operation * @adapter: the adapter * - * Start tasklets and timers associated with the DMA engine. + * Start bh work and timers associated with the DMA engine. */ void t4vf_sge_start(struct adapter *adapter) { @@ -2600,7 +2600,7 @@ void t4vf_sge_start(struct adapter *adapter) * t4vf_sge_stop - disable SGE operation * @adapter: the adapter * - * Stop tasklets and timers associated with the DMA engine. Note that + * Stop bh works and timers associated with the DMA engine. 
Note that * this is effective only if measures have been taken to disable any HW * events that may restart them. */ @@ -2692,7 +2692,7 @@ int t4vf_sge_init(struct adapter *adapter) s->fl_starve_thres = s->fl_starve_thres * 2 + 1; /* - * Set up tasklet timers. + * Set up bh work timers. */ timer_setup(&s->rx_timer, sge_rx_timer_cb, 0); timer_setup(&s->tx_timer, sge_tx_timer_cb, 0); From patchwork Tue Jul 30 18:33:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747746 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oi1-f182.google.com (mail-oi1-f182.google.com [209.85.167.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C2F8518CBEB; Tue, 30 Jul 2024 18:34:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.167.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364475; cv=none; b=XY+fFMpvvgTApMQCKAcZM5QoFeGMtZMpPC+HYPHRn6u6SpNMykBK3EyCJSk14MmfNj0fM9PDwLdGRDjgojS074gEDmxcIsAs/16Y3wFTEAsYsaLdqJz5W+zPQ4XeffT9aFr1mcDfurnLwrBwtYGTMfjQr+u42GmGUnmvcE4WW1Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364475; c=relaxed/simple; bh=5wCl1B1i8ji1xA4dADPfV0mjXWvLWyC2722XSPdktnI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=eO9/iIkukjslgJhp2fw/euXJ1AholqD56ed5/qXgqkELgkpjwbgMwzfKDI5aq9PdIfwcJ0XmJKBPfvSBA7ofhVTM9ZE9pzQmagM/hNqKByraNEV9pNn0KiYspkK9Wj+f4swNa0jqoqy51YXKaDlUwRcPqh5pHsVZDE+rwPvOdXQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=T67YaH+T; arc=none smtp.client-ip=209.85.167.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="T67YaH+T" Received: by mail-oi1-f182.google.com with SMTP id 5614622812f47-3db23a60850so2535217b6e.0; Tue, 30 Jul 2024 11:34:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364473; x=1722969273; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9dhdHUZpgwd2/wkpVLwdxeMY04zoPfYMfPcgph0II3s=; b=T67YaH+Tzm2m8mUDLYz7s9eKgA1dACCZq59HbOB9kkLpzDnK8dX3/etC8+z2ggco2Z GPUBXpzbxQci+YqLQJezkUdF/61MSi4rOf2a12wVfDN0/S0SrtoLjjMdWkAq+ZyayEPL BkKYebhxE8VKJjP6vPfDSGT+SrDjRqNijGeg0KJbYiGGuEni15NSeUVOO3BxHVt/FE6t TB14i5roL7FaQ0jG+sqDcKvCXyGRb4LOwkwGkfUrF5ZHI9BNVbuyM/O3XIxYoSdLT+Y8 bTomxGm/I0gpHgIUqXpJEwb/8UtkGCKiOlsj41TkBUYnloNTawRDNNIsf3ZD0sIXAUf8 VLyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364473; x=1722969273; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9dhdHUZpgwd2/wkpVLwdxeMY04zoPfYMfPcgph0II3s=; b=auDRxTMFaCx7yl3qsnAPti7Pz5bzodDqVeAQM/gLwDncnlm8vxat56RN3JB/fn/oZX PT1wjFyE/plBqcaXwcd+Gk87LB+OPNifPYbz/C2gH6rsjhZrW1rHBG7pa2rcK31ekZ/U 
PEGDGiHB25AyihBY2i6m+ohdNn0VO1fd+4v2Hnj6oJpSdOLfwrEK2XHHuxIoZLET3eV1 aqrdV+R1O1RgHhLg2SwRVZoxgsKHVBNEeQmVO/a7fr9iK7b6/abhPXUC9+rk+nAc+Jn5 4UZF0TSkEi0jEfWR5PdoNaeIpc944I4fXsUTDjkHxlpE7dqIRXJCtPuNCjiDRM1upTef SuSg== X-Forwarded-Encrypted: i=1; AJvYcCVm3KHYZqBunqfXPaAy9okTf/YPxO0VsL72pN+34QReSCBd3ljQMjDKeogo77YfUwoomsTNjPFNpzQ00w==@vger.kernel.org, AJvYcCW4rL5m631QZJxKGdKBnxwpH4FG/1Vl4CpDU9UsNFmb/VnsvNmqFKalaStD477r/Btnx0LrFk+F9zNk6S0=@vger.kernel.org, AJvYcCX9Qxov2T/DdO+aEZmDNKRdUEGdd4MX11KtWyIDrTCfz4bKx2VGNUC+0RjfxEe4hOa0L+G2/zDx@vger.kernel.org X-Gm-Message-State: AOJu0Yy8TeLYCUbj+oCX8lgzG5Oy2sBQrBAsqJYkT57j77w+uGoh9TaU vatnqHA+WhXhyNPSQa+JR2P6tx4301hRsylGgrUSitQccP9ISORQ X-Google-Smtp-Source: AGHT+IHN/mrTA3BMYiZNVarf9NMI3uOSdPrgVQeed6qhHnuiCVI/TIgkXkD2IX40hlbUAtEwpoQeoQ== X-Received: by 2002:a05:6808:f04:b0:3d2:1a92:8f4a with SMTP id 5614622812f47-3db23ac0933mr16165026b6e.23.1722364472735; Tue, 30 Jul 2024 11:34:32 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:31 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Denis Kirjanov , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 09/15] net: sundance: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:57 -0700 Message-Id: <20240730183403.4176544-10-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the dlink sundance driver. This transition ensures compatibility with the latest design and enhances performance. 
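Besides the usual schedule/kill substitutions, the tx timeout path here exercises the disable/enable pair: tasklet_disable_in_atomic()/tasklet_enable() become disable_work_sync()/enable_and_queue_work(). A minimal sketch of that pairing, assuming nothing beyond the helpers already used elsewhere in this series and an invented baz_dev structure:

#include <linux/workqueue.h>

struct baz_dev {
	struct work_struct tx_bh_work;		/* was: struct tasklet_struct tx_tasklet */
};

static void baz_tx_timeout(struct baz_dev *bz)
{
	/* was: tasklet_disable_in_atomic(&bz->tx_tasklet); */
	/* waits for any in-flight execution and keeps the work disabled */
	disable_work_sync(&bz->tx_bh_work);

	/* ... reset the tx ring with the bottom half quiesced ... */

	/* was: tasklet_enable(&bz->tx_tasklet); */
	/* re-enable and queue one run of the work item */
	enable_and_queue_work(system_bh_wq, &bz->tx_bh_work);
}

Note that enable_and_queue_work() also queues the work item, which the plain tasklet_enable() did not do; the extra queueing covers any event that arrived while the work was disabled.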
Signed-off-by: Allen Pais --- drivers/net/ethernet/dlink/sundance.c | 41 ++++++++++++++------------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/dlink/sundance.c b/drivers/net/ethernet/dlink/sundance.c index 8af5ecec7d61..65dfd32a9656 100644 --- a/drivers/net/ethernet/dlink/sundance.c +++ b/drivers/net/ethernet/dlink/sundance.c @@ -86,6 +86,7 @@ static char *media[MAX_UNITS]; #include #include #include +#include #include #include #include @@ -395,8 +396,8 @@ struct netdev_private { unsigned int an_enable:1; unsigned int speed; unsigned int wol_enabled:1; /* Wake on LAN enabled */ - struct tasklet_struct rx_tasklet; - struct tasklet_struct tx_tasklet; + struct work_struct rx_bh_work; + struct work_struct tx_bh_work; int budget; int cur_task; /* Multicast and receive mode. */ @@ -430,8 +431,8 @@ static void init_ring(struct net_device *dev); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); static int reset_tx (struct net_device *dev); static irqreturn_t intr_handler(int irq, void *dev_instance); -static void rx_poll(struct tasklet_struct *t); -static void tx_poll(struct tasklet_struct *t); +static void rx_poll(struct work_struct *work); +static void tx_poll(struct work_struct *work); static void refill_rx (struct net_device *dev); static void netdev_error(struct net_device *dev, int intr_status); static void netdev_error(struct net_device *dev, int intr_status); @@ -541,8 +542,8 @@ static int sundance_probe1(struct pci_dev *pdev, np->msg_enable = (1 << debug) - 1; spin_lock_init(&np->lock); spin_lock_init(&np->statlock); - tasklet_setup(&np->rx_tasklet, rx_poll); - tasklet_setup(&np->tx_tasklet, tx_poll); + INIT_WORK(&np->rx_bh_work, rx_poll); + INIT_WORK(&np->tx_bh_work, tx_poll); ring_space = dma_alloc_coherent(&pdev->dev, TX_TOTAL_SIZE, &ring_dma, GFP_KERNEL); @@ -965,7 +966,7 @@ static void tx_timeout(struct net_device *dev, unsigned int txqueue) unsigned long flag; netif_stop_queue(dev); - tasklet_disable_in_atomic(&np->tx_tasklet); + disable_work_sync(&np->tx_bh_work); iowrite16(0, ioaddr + IntrEnable); printk(KERN_WARNING "%s: Transmit timed out, TxStatus %2.2x " "TxFrameId %2.2x," @@ -1006,7 +1007,7 @@ static void tx_timeout(struct net_device *dev, unsigned int txqueue) netif_wake_queue(dev); } iowrite16(DEFAULT_INTR, ioaddr + IntrEnable); - tasklet_enable(&np->tx_tasklet); + enable_and_queue_work(system_bh_wq, &np->tx_bh_work); } @@ -1058,9 +1059,9 @@ static void init_ring(struct net_device *dev) } } -static void tx_poll(struct tasklet_struct *t) +static void tx_poll(struct work_struct *work) { - struct netdev_private *np = from_tasklet(np, t, tx_tasklet); + struct netdev_private *np = from_work(np, work, tx_bh_work); unsigned head = np->cur_task % TX_RING_SIZE; struct netdev_desc *txdesc = &np->tx_ring[(np->cur_tx - 1) % TX_RING_SIZE]; @@ -1104,11 +1105,11 @@ start_tx (struct sk_buff *skb, struct net_device *dev) goto drop_frame; txdesc->frag.length = cpu_to_le32 (skb->len | LastFrag); - /* Increment cur_tx before tasklet_schedule() */ + /* Increment cur_tx before bh_work is queued */ np->cur_tx++; mb(); - /* Schedule a tx_poll() task */ - tasklet_schedule(&np->tx_tasklet); + /* Queue a tx_poll() bh work */ + queue_work(system_bh_wq, &np->tx_bh_work); /* On some architectures: explicitly flush cache lines here. 
*/ if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 1 && @@ -1199,7 +1200,7 @@ static irqreturn_t intr_handler(int irq, void *dev_instance) ioaddr + IntrEnable); if (np->budget < 0) np->budget = RX_BUDGET; - tasklet_schedule(&np->rx_tasklet); + queue_work(system_bh_wq, &np->rx_bh_work); } if (intr_status & (IntrTxDone | IntrDrvRqst)) { tx_status = ioread16 (ioaddr + TxStatus); @@ -1315,9 +1316,9 @@ static irqreturn_t intr_handler(int irq, void *dev_instance) return IRQ_RETVAL(handled); } -static void rx_poll(struct tasklet_struct *t) +static void rx_poll(struct work_struct *work) { - struct netdev_private *np = from_tasklet(np, t, rx_tasklet); + struct netdev_private *np = from_work(np, work, rx_bh_work); struct net_device *dev = np->ndev; int entry = np->cur_rx % RX_RING_SIZE; int boguscnt = np->budget; @@ -1407,7 +1408,7 @@ static void rx_poll(struct tasklet_struct *t) np->budget -= received; if (np->budget <= 0) np->budget = RX_BUDGET; - tasklet_schedule(&np->rx_tasklet); + queue_work(system_bh_wq, &np->rx_bh_work); } static void refill_rx (struct net_device *dev) @@ -1819,9 +1820,9 @@ static int netdev_close(struct net_device *dev) struct sk_buff *skb; int i; - /* Wait and kill tasklet */ - tasklet_kill(&np->rx_tasklet); - tasklet_kill(&np->tx_tasklet); + /* Wait and cancel bh work */ + cancel_work_sync(&np->rx_bh_work); + cancel_work_sync(&np->tx_bh_work); np->cur_tx = 0; np->dirty_tx = 0; np->cur_task = 0; From patchwork Tue Jul 30 18:33:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747747 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oi1-f181.google.com (mail-oi1-f181.google.com [209.85.167.181]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB2A318CC19; Tue, 30 Jul 2024 18:34:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.167.181 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364478; cv=none; b=Cz0HOK+aFmllaPHQRXODJhrs15Eb9IWERnWrh2Mhn0b63YUA60kBXCtwlVSkOG/87mxDYkKoi1XmVkfSJqZ4faJosP9M8NTOjY+0yNcfrFPuvvka4eoihoCsBJtqlEatRAX09nSBcLKlykfCqucld+7OhMtNfKVPHA1o0ZYH9AY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364478; c=relaxed/simple; bh=dZhWibPkTPbRop+GGEjVuyTBB+ogvjd9DCdziISEp3o=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=uU2bSahGsZN7yifDlqze+cq6qNmuZ9wD27mkWFmMUJyS3PCpOwxxl+HPudXIKE7KzLD/Y1KbXFGIYwsbcVUzzD/WgQP61hG45DEstsnw2MApj8B1ivqyL0UgAQmYdC0Y1b6zk495sNvLAC/vDZOJNkz7oy0En4QD48+q7zEW/Rg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=nfz5XL0N; arc=none smtp.client-ip=209.85.167.181 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="nfz5XL0N" Received: by mail-oi1-f181.google.com with SMTP id 5614622812f47-3db1e4219f8so2273829b6e.3; Tue, 30 Jul 2024 11:34:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364476; x=1722969276; 
darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=jmTnJaPNct0B9da0B3lTLwHU9zpeK3VMEGqtVtu680U=; b=nfz5XL0NDAgThVMXEgpftuiEineFQSvnTsxtzN8zkd/NEO36Pf2m7AMnYwrUT9WMvZ aSCxCjs67e4l4Dn1trzfzFx2LcdOQx44/tJWUNJsa12tIsluYfSi07wELcPwDGzhXpdN hCn0RJac2vjk/y5QJ7lUEpa4NlgzIMLOBbpsmrnmb/jdxOLQ9iDK0JSiyzdBHrSSoybn vXyHQbk/ySgyKRPTHr8LQh9koH3VwsziXa8bvYeht0NDXpxIrnUYhjANWABaSEEkf1Gz D/PLRu2DuK9B3Hb19PTGGjV6nj0psfdCqDPBzE6M405lviVu9zU8/PlTNEmeW51HKdgV OFhA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364476; x=1722969276; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=jmTnJaPNct0B9da0B3lTLwHU9zpeK3VMEGqtVtu680U=; b=rB3FuYC8zaUkFs6Pbqp1tGGlz6Y0fs8qYXAmMvayzoPEjfyTA8bi14UWnArl0150v3 X+YcLC0V4gqDQT4ctIwsUcWYasSGUUHpG+Y9blGWjuf+Mc2BguLJhX9z3XOHNuQ6HC3S bLPYNtCPSgJ8JvwXqCbDPqSCVHqsaX0wyjuMDmQXorfBZ/pBq+lXfvFgaZlVJ1lhtz3G vrz1vNLbYyfI1ir/NV0R9Z2X17oZJWMBZbcYrn7eEy0/6Cy00LXMN66SNegyOKaTyQo7 o+mAJ6w81KDbvP7kFrpkZvhuLbzAQuxWECWchxCj7PaTaWn8TSdYJnvagOuqHJXOFrc/ cUHA== X-Forwarded-Encrypted: i=1; AJvYcCVpVnmsPBi3rUxEcfFNEaDZJ9RdggeGlHXDu8b+jN2i09eJ/XBdu6gckS9s3A+bastreG3Kl3IQ8nMuh/Ewm9QvgW7amCHhrCe4lLf7w3BK8qXyZiAJ5iYSYV9iwoDi6xEf9J2+BWbXw5nrgHtk1dCozu47OAbrXhUkAXi2cDx0sA== X-Gm-Message-State: AOJu0Yz7uc5uoVMX1mI4SRyQtFRAI1BHhTpRh4BvrBPsIL9OsOkRvNYh 1s+IcWt5An/eJ/Dj1GHKhb6PWJvVP95/CYlfgyKK7PS7kpU3rDpe X-Google-Smtp-Source: AGHT+IFYUFqKUc8jAc7GUvLlfMHj0UGc0ym5A8KGJbdJ6mtN9viwatfMgejHOkaagSTYV+OLZvuSLw== X-Received: by 2002:a05:6808:1520:b0:3da:a793:f10e with SMTP id 5614622812f47-3db238a2e47mr10615792b6e.18.1722364475751; Tue, 30 Jul 2024 11:34:35 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:35 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Cai Huoqing , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 10/15] net: hinic: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:58 -0700 Message-Id: <20240730183403.4176544-11-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. 
It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the huawei hinic driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- .../net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 2 +- .../net/ethernet/huawei/hinic/hinic_hw_eqs.c | 18 +++++++++--------- .../net/ethernet/huawei/hinic/hinic_hw_eqs.h | 2 +- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index d39eec9c62bf..f54feae40ef8 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -344,7 +344,7 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq, struct hinic_hw_wqe *hw_wqe; struct completion done; - /* Keep doorbell index correct. bh - for tasklet(ceq). */ + /* Keep doorbell index correct. For bh_work(ceq). */ spin_lock_bh(&cmdq->cmdq_lock); /* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow*/ diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 045c47786a04..1aecc934039e 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -368,12 +368,12 @@ static void eq_irq_work(struct work_struct *work) } /** - * ceq_tasklet - the tasklet of the EQ that received the event - * @t: the tasklet struct pointer + * ceq_bh_work - the bh_work of the EQ that received the event + * @work: the work struct pointer **/ -static void ceq_tasklet(struct tasklet_struct *t) +static void ceq_bh_work(struct work_struct *work) { - struct hinic_eq *ceq = from_tasklet(ceq, t, ceq_tasklet); + struct hinic_eq *ceq = from_work(ceq, work, ceq_bh_work); eq_irq_handler(ceq); } @@ -413,7 +413,7 @@ static irqreturn_t ceq_interrupt(int irq, void *data) /* clear resend timer cnt register */ hinic_msix_attr_cnt_clear(ceq->hwif, ceq->msix_entry.entry); - tasklet_schedule(&ceq->ceq_tasklet); + queue_work(system_bh_wq, &ceq->ceq_bh_work); return IRQ_HANDLED; } @@ -782,7 +782,7 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif, INIT_WORK(&aeq_work->work, eq_irq_work); } else if (type == HINIC_CEQ) { - tasklet_setup(&eq->ceq_tasklet, ceq_tasklet); + INIT_WORK(&eq->ceq_bh_work, ceq_bh_work); } /* set the attributes of the msix entry */ @@ -833,7 +833,7 @@ static void remove_eq(struct hinic_eq *eq) hinic_hwif_write_reg(eq->hwif, HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0); } else if (eq->type == HINIC_CEQ) { - tasklet_kill(&eq->ceq_tasklet); + cancel_work_sync(&eq->ceq_bh_work); /* clear ceq_len to avoid hw access host memory */ hinic_hwif_write_reg(eq->hwif, HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id), 0); @@ -968,9 +968,9 @@ void hinic_dump_ceq_info(struct hinic_hwdev *hwdev) ci = hinic_hwif_read_reg(hwdev->hwif, addr); addr = EQ_PROD_IDX_REG_ADDR(eq); pi = hinic_hwif_read_reg(hwdev->hwif, addr); - dev_err(&hwdev->hwif->pdev->dev, "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %d, ceqe: 0x%x\n", + dev_err(&hwdev->hwif->pdev->dev, "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, work_pending: %d, wrap: %d, ceqe: 0x%x\n", q_id, ci, eq->cons_idx, pi, - eq->ceq_tasklet.state, + work_pending(&eq->ceq_bh_work), eq->wrapped, be32_to_cpu(*(__be32 *)(GET_CURR_CEQ_ELEM(eq)))); } } diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h index 
2f3222174fc7..8fed3155f15c 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h @@ -193,7 +193,7 @@ struct hinic_eq { struct hinic_eq_work aeq_work; - struct tasklet_struct ceq_tasklet; + struct work_struct ceq_bh_work; }; struct hinic_hw_event_cb { From patchwork Tue Jul 30 18:33:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747748 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pf1-f174.google.com (mail-pf1-f174.google.com [209.85.210.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B4A221A7204; Tue, 30 Jul 2024 18:34:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364481; cv=none; b=sgm4y7YTfseWgt1WCoadXQuZz5Fddj2HeG+wZucmELQuD0nQBK7Qvpi7b2czgp0FRtt7prNtKBMD6fyRS+jDJV3PjJJNL2zWwa30cR4Xb0GZPXD5VjNfe+6OLtZ8cB6Q1JksbrIaKFi5jDIgH/nojfg+MY0EgtI2HQ7EJ3gzJCE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364481; c=relaxed/simple; bh=5fDJtwutEEA5fJCM51lQrlFtqk5iBeZOuS616CxfH8M=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=FOMxTbMWVa4Ofys0FdQPGXM0MFKyVi2SE0/LGDOs+VjY/FueCPUckBeD1atLqALMmp88O/Mq6PKrFRhxAX1a5Ki4Vi1bevxFqmfjblu9Sq0jAM48msnasjxaYSzlni+1HfVx5NKMsoFD9FpnrRbbGEqG0ExYCqOUHKL3+B+HHNA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=JXbIJGJB; arc=none smtp.client-ip=209.85.210.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="JXbIJGJB" Received: by mail-pf1-f174.google.com with SMTP id d2e1a72fcca58-70d25b5b6b0so3478470b3a.2; Tue, 30 Jul 2024 11:34:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364479; x=1722969279; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oEA7niYFvLj2PrS7r2T7s19npKTXpaxvDJNgOhJ1Ngk=; b=JXbIJGJBrdeCc31sIetIDFoC4yd1f838X24bwyYlB9r/BX8v5+n2uZX6kea6e7nP5W u+FC9wNnrETWY1J9Wdn5qCZvfr3biJZGbFfkO1rMsSBuV/j/TYp60CsA8VC01qfkodYo tuXxkyWiRHryBRJSZPAt0Jgw8FjaMTdJx+f2Y3oQ0zdZqbOxTYrJ+ek4ouAjlJp8/HY1 SEqiI5OWXBb2gEBRhtuUpAqHjPCT7D56MWugpjd1NP9D8ugGzxKF22oBH82d9XmZX73L 4uC2c4+oCHC0hh7piI77O0EdKASWvNIBesy9xhjCPCAPTF8l8fnogy49pnkBLtQFLr2w 50dQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364479; x=1722969279; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oEA7niYFvLj2PrS7r2T7s19npKTXpaxvDJNgOhJ1Ngk=; b=f4y0XefvHnwsuAds7FfKlSNnxHQXKXDUF3+a7sMnIk5lSRLYI+CtN0bl5yOTZ7X9ZO e8qqq/pEiDJiHC11cj2YaWY5nCnKXL3WvBdSdVs5WmMG++h/ZvNhHR3Pf5Vp9xFplyOL QOLDqp5nWI6BoCvvPqng5wn2oXO8Nzej9Lshyt4sPWM7boGSoE2KCs55l9oOuiAKzeC4 
4jmRQ3FwbToBqGyZIda53TaLTH3TEyorbjEEk70wC2vQxqTmsqeSuQF2Iqjqu04UWSIA cLt/BnyfbLDRU5M8J6v+LCtIaL9CyPbQv4efseNHVPCAjSiRiYINgBNy21NYHfTecTjw W9iA== X-Forwarded-Encrypted: i=1; AJvYcCXE5l03b+TDfrAtERN2W9WPwQoZgCMdR3qYwXyiWMP1++OeCJ9cSt5uS0xdB2BezxicLMqBLiS65lJ108rKdtABUmoBilronn78om+NXnVlcHmjqsVYA8o/bJzlacRT540bPH5pfB8qFQkP8EE+cOnuMDeVl03FMtokcRakOIhYyQ== X-Gm-Message-State: AOJu0YwI5MJVOC4loYIbJQz9O9e9JnNC42eJwqvDt93MMaslJ+XiVnVF J2w5SMB141Fwo/CrZgmdyI3M5AyQ+as6sqtDZszKWG2DG5QwmvZX X-Google-Smtp-Source: AGHT+IEhQ5Y7nVWERwHGJ8+Ff0PeqVL7tnuSuJ2b9yeCbRbyWeoECh5IxoPpUVILeKEPbsdhW+rrew== X-Received: by 2002:a05:6a00:2383:b0:70e:98e2:fdb5 with SMTP id d2e1a72fcca58-70ecedbcabamr10541966b3a.29.1722364478770; Tue, 30 Jul 2024 11:34:38 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:38 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 11/15] net: ehea: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:33:59 -0700 Message-Id: <20240730183403.4176544-12-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the ehea driver. This transition ensures compatibility with the latest design and enhances performance. 
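For reviewers less familiar with the new API, the shape of the conversion is the same as in the rest of the series. Below is a minimal sketch using a hypothetical foo_adapter structure (not the actual ehea code); it assumes a kernel that already provides system_bh_highpri_wq and the from_work() helper used throughout this series:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct foo_adapter {
	struct work_struct neq_bh_work;	/* was: struct tasklet_struct neq_tasklet */
};

/* The handler now takes a work_struct and recovers its container
 * with from_work() instead of from_tasklet().
 */
static void foo_neq_bh_work(struct work_struct *work)
{
	struct foo_adapter *adapter = from_work(adapter, work, neq_bh_work);

	/* event-queue processing, unchanged from the old tasklet body */
}

static irqreturn_t foo_interrupt_neq(int irq, void *param)
{
	struct foo_adapter *adapter = param;

	/* tasklet_hi_schedule() maps to queueing on the high-priority
	 * bottom-half workqueue.
	 */
	queue_work(system_bh_highpri_wq, &adapter->neq_bh_work);
	return IRQ_HANDLED;
}

static void foo_setup(struct foo_adapter *adapter)
{
	INIT_WORK(&adapter->neq_bh_work, foo_neq_bh_work);	/* was: tasklet_setup() */
}

static void foo_teardown(struct foo_adapter *adapter)
{
	cancel_work_sync(&adapter->neq_bh_work);	/* was: tasklet_kill() */
}
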
Signed-off-by: Allen Pais --- drivers/net/ethernet/ibm/ehea/ehea.h | 3 ++- drivers/net/ethernet/ibm/ehea/ehea_main.c | 14 +++++++------- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/ibm/ehea/ehea.h b/drivers/net/ethernet/ibm/ehea/ehea.h index 208c440a602b..c1e7e22884fa 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea.h +++ b/drivers/net/ethernet/ibm/ehea/ehea.h @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -381,7 +382,7 @@ struct ehea_adapter { struct platform_device *ofdev; struct ehea_port *port[EHEA_MAX_PORTS]; struct ehea_eq *neq; /* notification event queue */ - struct tasklet_struct neq_tasklet; + struct work_struct neq_bh_work; struct ehea_mr mr; u32 pd; /* protection domain */ u64 max_mc_mac; /* max number of multicast mac addresses */ diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c index 1e29e5c9a2df..6960d06805f6 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea_main.c +++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c @@ -976,7 +976,7 @@ int ehea_sense_port_attr(struct ehea_port *port) u64 hret; struct hcp_ehea_port_cb0 *cb0; - /* may be called via ehea_neq_tasklet() */ + /* may be called via ehea_neq_bh_work() */ cb0 = (void *)get_zeroed_page(GFP_ATOMIC); if (!cb0) { pr_err("no mem for cb0\n"); @@ -1216,9 +1216,9 @@ static void ehea_parse_eqe(struct ehea_adapter *adapter, u64 eqe) } } -static void ehea_neq_tasklet(struct tasklet_struct *t) +static void ehea_neq_bh_work(struct work_struct *work) { - struct ehea_adapter *adapter = from_tasklet(adapter, t, neq_tasklet); + struct ehea_adapter *adapter = from_work(adapter, work, neq_bh_work); struct ehea_eqe *eqe; u64 event_mask; @@ -1243,7 +1243,7 @@ static void ehea_neq_tasklet(struct tasklet_struct *t) static irqreturn_t ehea_interrupt_neq(int irq, void *param) { struct ehea_adapter *adapter = param; - tasklet_hi_schedule(&adapter->neq_tasklet); + queue_work(system_bh_highpri_wq, &adapter->neq_bh_work); return IRQ_HANDLED; } @@ -3423,7 +3423,7 @@ static int ehea_probe_adapter(struct platform_device *dev) goto out_free_ad; } - tasklet_setup(&adapter->neq_tasklet, ehea_neq_tasklet); + INIT_WORK(&adapter->neq_bh_work, ehea_neq_bh_work); ret = ehea_create_device_sysfs(dev); if (ret) @@ -3444,7 +3444,7 @@ static int ehea_probe_adapter(struct platform_device *dev) } /* Handle any events that might be pending. 
*/ - tasklet_hi_schedule(&adapter->neq_tasklet); + queue_work(system_bh_highpri_wq, &adapter->neq_bh_work); ret = 0; goto out; @@ -3485,7 +3485,7 @@ static void ehea_remove(struct platform_device *dev) ehea_remove_device_sysfs(dev); ibmebus_free_irq(adapter->neq->attr.ist1, adapter); - tasklet_kill(&adapter->neq_tasklet); + cancel_work_sync(&adapter->neq_bh_work); ehea_destroy_eq(adapter->neq); ehea_remove_adapter_mr(adapter); From patchwork Tue Jul 30 18:34:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747749 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pg1-f176.google.com (mail-pg1-f176.google.com [209.85.215.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 01E761A7F62; Tue, 30 Jul 2024 18:34:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364483; cv=none; b=cTu+PKdC3MCO3MLxMV0hJTmdDpNzDr7AIA1/GwoQj+jinBByRbnTEdphpAYDlnaA/r3yxDg+FlTbpnsuJaoV5sKKChGuwGJwCgAsRF4km6VMtbQ4NNaH9R1HanK/Bit5cz5tmaWB2tTyClQBwysWA4KeLMHLRa9SrQeDZZFPTes= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364483; c=relaxed/simple; bh=zsn+RM0YFuxw4wHbUzBqNmfi+amnO1IxsFnwkeEF500=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=LdRZhK200KbdyUuIzRb2AGwPtx0rmWoLm8NXQ6wjsjclSsLiE8inuOPu9MVn3e5SsVYWBHRt3m7vyB/chvoia2uChhm8Us6yn4G1q49zVYnJgxl0+XcwBKkPj/JbhQI0FtffQot2tgt+OYGilzezFXoQkOzZlXCiC2P6tFTpW1g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=G5mHiHsC; arc=none smtp.client-ip=209.85.215.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="G5mHiHsC" Received: by mail-pg1-f176.google.com with SMTP id 41be03b00d2f7-7b0c9bbddb4so939799a12.3; Tue, 30 Jul 2024 11:34:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364481; x=1722969281; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=XJF28MhC2MiqvCUOHdECUHJyNP06dmnDOSRczPxFDXg=; b=G5mHiHsCDMiVsicyDrxoeBMEnCk5L4eyo9JBc51w3248L9zSReFm6MkIykk9KFk42O mOee6obyzQ3NDRwbDnytyt6YK22nnMxyVt+g8+XL2QVVlKrPzZE5AG7cuCo80ndPtQUs hqnwIEcGE9/75TfED7RkA1SrC2c/KgRtSQgqP2Kx04T74azO2Ur5n5+XSRBZJf07n59q 3FTFksI3R/C6t3/rMtQmSaCctZH4c7hRu3Tpf3TnaSW+jYZreiRDJbBfZPH4fr30n/Cv GE++t8y8j3mBjdlGCp4KFIXhh9TbZSW59jRBQ7jHwbIVfNIyrxFDxay4PmzRhD0Om7RF GzuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364481; x=1722969281; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=XJF28MhC2MiqvCUOHdECUHJyNP06dmnDOSRczPxFDXg=; b=g2bvIyqWzddvH4Is4bDDoE0ktcgO51+644ZMfH59NaAk1w9mxcvIUQNXen98Yt+qvm 
/iQqrtnBoph1saj/7ElSNbk8SY+S7oC0dgSnGt5jkuK/32K6hn8aUXiz/L+EsGn1TIby KxOJbwB2NTR0xjxXFYqmw7kmc5Of+vlbLVdb6RMK9vz63W76OJutRDplxXYhDpK8S/4w 3xGlDpO8lsDxfxU1pfVIX35xQ+J0CrLa58SUOI89IsBt3kqXXPFlUjKjmIhf8fsHxHNo PG+if++8qcCF7ueQH/AtLLpYrzeGLW/fH6JKZsse/e8mykG75Zgiq+ov+dGIlG8vH4+5 OuLg== X-Forwarded-Encrypted: i=1; AJvYcCXDTNVweLas2yiC2RzMsAPyySDkGHzAYiQ0Rh3jB8iJ2WyezZZXXQhH7N5/UgN76+4YhFS/pb4RNhtla2UR9wL5XXXay1UsX0NRDF4f8jwrRtlAN4qeJg7x+/tOOs7epodbbZyFSMAxL4L9lvjnk4ra18pZqB54qXVOrMnwIJ0oJQ== X-Gm-Message-State: AOJu0YwJDnpYEPuw6xjQZj+becX9++4O0dHUuDl6Cdm63Hm7hVXiZZIh 6VXJgVuYeK/Yd3z3nKEmKjr1XqA9IYAVDZyuC5UBW5YkgXQbq4Ue X-Google-Smtp-Source: AGHT+IGo7lrGjbCeYJR8CgOHZdoGXq6ysooeIc5UeAEWFCJBQ9LFDG+NCFqdDOvXEN4PPr122ZUWMw== X-Received: by 2002:a05:6a20:7347:b0:1c2:97cd:94d8 with SMTP id adf61e73a8af0-1c4a12cbcccmr11576550637.20.1722364481320; Tue, 30 Jul 2024 11:34:41 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:40 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Haren Myneni , Rick Lindsley , Nick Child , Thomas Falcon , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais , linuxppc-dev@lists.ozlabs.org Subject: [net-next v3 12/15] net: ibmvnic: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:34:00 -0700 Message-Id: <20240730183403.4176544-13-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the ibmvnic driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- drivers/net/ethernet/ibm/ibmvnic.c | 24 ++++++++++++------------ drivers/net/ethernet/ibm/ibmvnic.h | 2 +- 2 files changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 23ebeb143987..0156efeff96a 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -2737,7 +2737,7 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason) /* * Initialize the init_done completion and return code values. We * can get a transport event just after registering the CRQ and the - * tasklet will use this to communicate the transport event. 
To ensure + * bh work will use this to communicate the transport event. To ensure * we don't miss the notification/error, initialize these _before_ * regisering the CRQ. */ @@ -4447,7 +4447,7 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry) int cap_reqs; /* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on - * the PROMISC flag). Initialize this count upfront. When the tasklet + * the PROMISC flag). Initialize this count upfront. When the bh work * receives a response to all of these, it will send the next protocol * message (QUERY_IP_OFFLOAD). */ @@ -4983,7 +4983,7 @@ static void send_query_cap(struct ibmvnic_adapter *adapter) int cap_reqs; /* We send out 25 QUERY_CAPABILITY CRQs below. Initialize this count - * upfront. When the tasklet receives a response to all of these, it + * upfront. When the bh work receives a response to all of these, it * can send out the next protocol messaage (REQUEST_CAPABILITY). */ cap_reqs = 25; @@ -5495,7 +5495,7 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq, int i; /* CHECK: Test/set of login_pending does not need to be atomic - * because only ibmvnic_tasklet tests/clears this. + * because only ibmvnic_bh_work tests/clears this. */ if (!adapter->login_pending) { netdev_warn(netdev, "Ignoring unexpected login response\n"); @@ -6081,13 +6081,13 @@ static irqreturn_t ibmvnic_interrupt(int irq, void *instance) { struct ibmvnic_adapter *adapter = instance; - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return IRQ_HANDLED; } -static void ibmvnic_tasklet(struct tasklet_struct *t) +static void ibmvnic_bh_work(struct work_struct *work) { - struct ibmvnic_adapter *adapter = from_tasklet(adapter, t, tasklet); + struct ibmvnic_adapter *adapter = from_work(adapter, work, bh_work); struct ibmvnic_crq_queue *queue = &adapter->crq; union ibmvnic_crq *crq; unsigned long flags; @@ -6168,7 +6168,7 @@ static void release_crq_queue(struct ibmvnic_adapter *adapter) netdev_dbg(adapter->netdev, "Releasing CRQ\n"); free_irq(vdev->irq, adapter); - tasklet_kill(&adapter->tasklet); + cancel_work_sync(&adapter->bh_work); do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); @@ -6219,7 +6219,7 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter) retrc = 0; - tasklet_setup(&adapter->tasklet, (void *)ibmvnic_tasklet); + INIT_WORK(&adapter->bh_work, (void *)ibmvnic_bh_work); netdev_dbg(adapter->netdev, "registering irq 0x%x\n", vdev->irq); snprintf(crq->name, sizeof(crq->name), "ibmvnic-%x", @@ -6241,12 +6241,12 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter) spin_lock_init(&crq->lock); /* process any CRQs that were queued before we enabled interrupts */ - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return retrc; req_irq_failed: - tasklet_kill(&adapter->tasklet); + cancel_work_sync(&adapter->bh_work); do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); @@ -6639,7 +6639,7 @@ static int ibmvnic_resume(struct device *dev) if (adapter->state != VNIC_OPEN) return 0; - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return 0; } diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h index 94ac36b1408b..b65b210a8059 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.h +++ b/drivers/net/ethernet/ibm/ibmvnic.h @@ -1036,7 +1036,7 @@ struct ibmvnic_adapter { u32 cur_rx_buf_sz; 
u32 prev_rx_buf_sz; - struct tasklet_struct tasklet; + struct work_struct bh_work; enum vnic_state state; /* Used for serialization of state field. When taking both state * and rwi locks, take state lock first. From patchwork Tue Jul 30 18:34:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747750 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pf1-f169.google.com (mail-pf1-f169.google.com [209.85.210.169]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF6731A7F94; Tue, 30 Jul 2024 18:34:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.169 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364486; cv=none; b=MtXFPdT8xJca97e/SgjAUorDJXQNZT4dk6FN8iADASobhThfAcRhZX2mssFQwspV5kHxRkfRGbz0L7d6rBR29H9sPVgXDyMMu8Lip2hTpGaAkwgzlWWx25nRIj6+XEF1Rp28r9kpv+4IayxXHGY63YKauzx3RfxieaSdOChfFwQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364486; c=relaxed/simple; bh=deS3mafeOZtc6o394hOxUHa0kJK4MmCbZnzACvGXWHY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=hQjwSxNg2N5dl6fQYTHhfBMNlwSRXY6CTmj7J/YtrP8W/d4EVJ/87WRjXVV7U2YtgRfFeOOfG8RpiuGIjR0gfLauVEFgiJsBiP9YonF3NgzIUCLvoqu70Ka5sdaP+1IeX4Zevclko6fO3ptUvO2icr1Dv3aMcjZusYZDgNCefNw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=fs7BynC8; arc=none smtp.client-ip=209.85.210.169 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="fs7BynC8" Received: by mail-pf1-f169.google.com with SMTP id d2e1a72fcca58-70eb73a9f14so3646498b3a.2; Tue, 30 Jul 2024 11:34:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364484; x=1722969284; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=e4IhuhN76x48NFIaGWh3almHhHQ4sS1Iy4QAIkMSA2s=; b=fs7BynC89xexAQ/6zr9dKBcV+nBTzGqszVpm/4vy8CkP41x/Q4EZVKaurbweAa+qcb HmkAo8B9LJK6PUUd1kPkGOj6lpZhs7Tb+snleS3cVYtjccGKWtxk7TZyC7G2z4q7mg5W 2/IhrKVqlVDgE5usXhsOMTi1jT7JsLB/7/Rywv7t6dNhc3+zZNcnQkIlFnsqsBXu2xwx tN8XP98VH9bFR7VIWzRos/A5hGJqPDHxSsxSibkYjsd/CZ6qUdBB66ih93Oy6YtM21nM WoAquD6RPDkE9ICuCChYsNCLVmc5OvuOQ3XnfmzIGzXYw4fsrm/pzWXj0qqDHvrUYEEw lAJw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364484; x=1722969284; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=e4IhuhN76x48NFIaGWh3almHhHQ4sS1Iy4QAIkMSA2s=; b=Nq/abbAqcpR8LRR3cJsFq7ynAaoOGB880Ab5gXX+RnybPyksxa4stLjsYLc07lobk7 P43kCYzhbi6TW+0YSDEH6T6oPC5xMXEXTJc7CYUTUTaCUkttogdINsiDpDfxHUMbZ9/G OhjueIqkKuvLg9rIYp5KcQyTye8hJnDpxPV3RSztLifWIB2IwqwVz+WsyiVKDGYRzLr1 qwK2sSnNDfATGzLsN+dUwXwhyhsivq794/Nbc8W5qdGSyPhoEV1U5erSf3hwl0RTUSBJ 
BWnM5jUxa6fPN1WnBWXw0zufDtA5870/pdzZQAudMV42FLbzsKpEOH5+R3coEHrc4xrO TKnA== X-Forwarded-Encrypted: i=1; AJvYcCX4jOmp0LvF0Lth65SNBUFu/nMP957hWqF/J+N8IIT9/DK9aEdA5SdnddVvMlq+ENwELPwfmoJ+LwMYFhkaX8wl41jHZqzcz8CiWN4XxrNFLRpVFXO5L2jyiI1xCgFg6tniWi1t6LiGMxpwb9Byq3xlCqredwkRS+oFajfhaonKNA== X-Gm-Message-State: AOJu0YxQQGM8OA2uNQxIcCglNwQ9L4olaxHKjMt4ddWg2SdpQpHKKkf4 HO4PCxzAlh6oeIhhlZXFlBdBYsT7YXJ0XiXLA26ApwHPCkxGmtka X-Google-Smtp-Source: AGHT+IG88geK142qz/ueS6a+nSgNU9iKp3YcNv0ScHzsZKu/ThRnN9UisAwSywOnQSL0Ilsld4j4rg== X-Received: by 2002:a05:6a21:32a9:b0:1c0:f590:f77f with SMTP id adf61e73a8af0-1c4a0e03d56mr17146966637.0.1722364483829; Tue, 30 Jul 2024 11:34:43 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:43 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Guo-Fu Tseng , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 13/15] net: jme: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:34:01 -0700 Message-Id: <20240730183403.4176544-14-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the jme driver. This transition ensures compatibility with the latest design and enhances performance. We should queue the work only if it was queued at cancel time. Introduce rxempty_bh_work_queued, suggested by Paolo Abeni. 
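As a rough illustration of the point above, the disable/re-enable path keeps a flag so the rxempty work is re-queued only if it was pending when it was disabled. The sketch below uses a hypothetical foo_adapter and relies on disable_work_sync() returning true when it cancelled a pending work item, exactly as it is used in the diff that follows:

#include <linux/workqueue.h>

struct foo_adapter {
	struct work_struct rxempty_bh_work;
	bool rxempty_bh_work_queued;
};

static void foo_quiesce(struct foo_adapter *foo)
{
	/* Remember whether the work item was pending at disable time. */
	foo->rxempty_bh_work_queued = disable_work_sync(&foo->rxempty_bh_work);
}

static void foo_restart(struct foo_adapter *foo)
{
	/* Re-queue only if it had been queued when we disabled it;
	 * otherwise just drop the disable count.
	 */
	if (foo->rxempty_bh_work_queued)
		enable_and_queue_work(system_bh_wq, &foo->rxempty_bh_work);
	else
		enable_work(&foo->rxempty_bh_work);
}
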
Signed-off-by: Allen Pais --- drivers/net/ethernet/jme.c | 80 +++++++++++++++++++++----------------- drivers/net/ethernet/jme.h | 9 +++-- 2 files changed, 49 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c index b06e24562973..bdaeaeb477e4 100644 --- a/drivers/net/ethernet/jme.c +++ b/drivers/net/ethernet/jme.c @@ -1141,7 +1141,7 @@ jme_dynamic_pcc(struct jme_adapter *jme) if (unlikely(dpi->attempt != dpi->cur && dpi->cnt > 5)) { if (dpi->attempt < dpi->cur) - tasklet_schedule(&jme->rxclean_task); + queue_work(system_bh_wq, &jme->rxclean_bh_work); jme_set_rx_pcc(jme, dpi->attempt); dpi->cur = dpi->attempt; dpi->cnt = 0; @@ -1182,9 +1182,9 @@ jme_shutdown_nic(struct jme_adapter *jme) } static void -jme_pcc_tasklet(struct tasklet_struct *t) +jme_pcc_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, pcc_task); + struct jme_adapter *jme = from_work(jme, work, pcc_bh_work); struct net_device *netdev = jme->dev; if (unlikely(test_bit(JME_FLAG_SHUTDOWN, &jme->flags))) { @@ -1282,9 +1282,9 @@ static void jme_link_change_work(struct work_struct *work) jme_stop_shutdown_timer(jme); jme_stop_pcc_timer(jme); - tasklet_disable(&jme->txclean_task); - tasklet_disable(&jme->rxclean_task); - tasklet_disable(&jme->rxempty_task); + disable_work_sync(&jme->txclean_bh_work); + disable_work_sync(&jme->rxclean_bh_work); + jme->rxempty_bh_work_queued = disable_work_sync(&jme->rxempty_bh_work); if (netif_carrier_ok(netdev)) { jme_disable_rx_engine(jme); @@ -1304,7 +1304,7 @@ static void jme_link_change_work(struct work_struct *work) rc = jme_setup_rx_resources(jme); if (rc) { pr_err("Allocating resources for RX error, Device STOPPED!\n"); - goto out_enable_tasklet; + goto out_enable_bh_work; } rc = jme_setup_tx_resources(jme); @@ -1326,22 +1326,26 @@ static void jme_link_change_work(struct work_struct *work) jme_start_shutdown_timer(jme); } - goto out_enable_tasklet; + goto out_enable_bh_work; err_out_free_rx_resources: jme_free_rx_resources(jme); -out_enable_tasklet: - tasklet_enable(&jme->txclean_task); - tasklet_enable(&jme->rxclean_task); - tasklet_enable(&jme->rxempty_task); +out_enable_bh_work: + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work); + if (jme->rxempty_bh_work_queued) + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work); + else + enable_work(&jme->rxempty_bh_work); + out: atomic_inc(&jme->link_changing); } static void -jme_rx_clean_tasklet(struct tasklet_struct *t) +jme_rx_clean_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, rxclean_task); + struct jme_adapter *jme = from_work(jme, work, rxclean_bh_work); struct dynpcc_info *dpi = &(jme->dpi); jme_process_receive(jme, jme->rx_ring_size); @@ -1374,9 +1378,9 @@ jme_poll(JME_NAPI_HOLDER(holder), JME_NAPI_WEIGHT(budget)) } static void -jme_rx_empty_tasklet(struct tasklet_struct *t) +jme_rx_empty_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, rxempty_task); + struct jme_adapter *jme = from_work(jme, work, rxempty_bh_work); if (unlikely(atomic_read(&jme->link_changing) != 1)) return; @@ -1386,7 +1390,7 @@ jme_rx_empty_tasklet(struct tasklet_struct *t) netif_info(jme, rx_status, jme->dev, "RX Queue Full!\n"); - jme_rx_clean_tasklet(&jme->rxclean_task); + jme_rx_clean_bh_work(&jme->rxclean_bh_work); while (atomic_read(&jme->rx_empty) > 0) { atomic_dec(&jme->rx_empty); @@ -1410,9 +1414,9 @@ 
jme_wake_queue_if_stopped(struct jme_adapter *jme) } -static void jme_tx_clean_tasklet(struct tasklet_struct *t) +static void jme_tx_clean_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, txclean_task); + struct jme_adapter *jme = from_work(jme, work, txclean_bh_work); struct jme_ring *txring = &(jme->txring[0]); struct txdesc *txdesc = txring->desc; struct jme_buffer_info *txbi = txring->bufinf, *ctxbi, *ttxbi; @@ -1510,12 +1514,12 @@ jme_intr_msi(struct jme_adapter *jme, u32 intrstat) if (intrstat & INTR_TMINTR) { jwrite32(jme, JME_IEVE, INTR_TMINTR); - tasklet_schedule(&jme->pcc_task); + queue_work(system_bh_wq, &jme->pcc_bh_work); } if (intrstat & (INTR_PCCTXTO | INTR_PCCTX)) { jwrite32(jme, JME_IEVE, INTR_PCCTXTO | INTR_PCCTX | INTR_TX0); - tasklet_schedule(&jme->txclean_task); + queue_work(system_bh_wq, &jme->txclean_bh_work); } if ((intrstat & (INTR_PCCRX0TO | INTR_PCCRX0 | INTR_RX0EMP))) { @@ -1538,9 +1542,9 @@ jme_intr_msi(struct jme_adapter *jme, u32 intrstat) } else { if (intrstat & INTR_RX0EMP) { atomic_inc(&jme->rx_empty); - tasklet_hi_schedule(&jme->rxempty_task); + queue_work(system_bh_highpri_wq, &jme->rxempty_bh_work); } else if (intrstat & (INTR_PCCRX0TO | INTR_PCCRX0)) { - tasklet_hi_schedule(&jme->rxclean_task); + queue_work(system_bh_highpri_wq, &jme->rxclean_bh_work); } } @@ -1826,9 +1830,9 @@ jme_open(struct net_device *netdev) jme_clear_pm_disable_wol(jme); JME_NAPI_ENABLE(jme); - tasklet_setup(&jme->txclean_task, jme_tx_clean_tasklet); - tasklet_setup(&jme->rxclean_task, jme_rx_clean_tasklet); - tasklet_setup(&jme->rxempty_task, jme_rx_empty_tasklet); + INIT_WORK(&jme->txclean_bh_work, jme_tx_clean_bh_work); + INIT_WORK(&jme->rxclean_bh_work, jme_rx_clean_bh_work); + INIT_WORK(&jme->rxempty_bh_work, jme_rx_empty_bh_work); rc = jme_request_irq(jme); if (rc) @@ -1914,9 +1918,10 @@ jme_close(struct net_device *netdev) JME_NAPI_DISABLE(jme); cancel_work_sync(&jme->linkch_task); - tasklet_kill(&jme->txclean_task); - tasklet_kill(&jme->rxclean_task); - tasklet_kill(&jme->rxempty_task); + cancel_work_sync(&jme->txclean_bh_work); + cancel_work_sync(&jme->rxclean_bh_work); + jme->rxempty_bh_work_queued = false; + cancel_work_sync(&jme->rxempty_bh_work); jme_disable_rx_engine(jme); jme_disable_tx_engine(jme); @@ -3020,7 +3025,7 @@ jme_init_one(struct pci_dev *pdev, atomic_set(&jme->tx_cleaning, 1); atomic_set(&jme->rx_empty, 1); - tasklet_setup(&jme->pcc_task, jme_pcc_tasklet); + INIT_WORK(&jme->pcc_bh_work, jme_pcc_bh_work); INIT_WORK(&jme->linkch_task, jme_link_change_work); jme->dpi.cur = PCC_P1; @@ -3180,9 +3185,9 @@ jme_suspend(struct device *dev) netif_stop_queue(netdev); jme_stop_irq(jme); - tasklet_disable(&jme->txclean_task); - tasklet_disable(&jme->rxclean_task); - tasklet_disable(&jme->rxempty_task); + disable_work_sync(&jme->txclean_bh_work); + disable_work_sync(&jme->rxclean_bh_work); + jme->rxempty_bh_work_queued = disable_work_sync(&jme->rxempty_bh_work); if (netif_carrier_ok(netdev)) { if (test_bit(JME_FLAG_POLL, &jme->flags)) @@ -3198,9 +3203,12 @@ jme_suspend(struct device *dev) jme->phylink = 0; } - tasklet_enable(&jme->txclean_task); - tasklet_enable(&jme->rxclean_task); - tasklet_enable(&jme->rxempty_task); + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work); + if (jme->rxempty_bh_work_queued) + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work); + else + enable_work(&jme->rxempty_bh_work); jme_powersave_phy(jme); diff --git 
a/drivers/net/ethernet/jme.h b/drivers/net/ethernet/jme.h index 860494ff3714..44aaf7625dc3 100644 --- a/drivers/net/ethernet/jme.h +++ b/drivers/net/ethernet/jme.h @@ -406,11 +406,12 @@ struct jme_adapter { spinlock_t phy_lock; spinlock_t macaddr_lock; spinlock_t rxmcs_lock; - struct tasklet_struct rxempty_task; - struct tasklet_struct rxclean_task; - struct tasklet_struct txclean_task; + struct work_struct rxempty_bh_work; + struct work_struct rxclean_bh_work; + struct work_struct txclean_bh_work; + bool rxempty_bh_work_queued; struct work_struct linkch_task; - struct tasklet_struct pcc_task; + struct work_struct pcc_bh_work; unsigned long flags; u32 reg_txcs; u32 reg_txpfc; From patchwork Tue Jul 30 18:34:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13747751 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pg1-f173.google.com (mail-pg1-f173.google.com [209.85.215.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EEA921A8BE5; Tue, 30 Jul 2024 18:34:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364488; cv=none; b=C59wyrhpyxzkVrdB5UuF4dupNmuWfrm4AJgoeQZLNtGkrNSLOV20GPh+qXUqDAiSoWrppX7ClJk0H0NFK89qMS70r8h1OJI9SygiXnBgaAOmQPvZ1bxc0ss+0tyFBwPA6FDwsr3g/B5fo1rNunNKq8r9hgH3W0Kkicj0UPBV/BM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364488; c=relaxed/simple; bh=LEyJiOXnSr7IzNm7xZKPMDnB7ns46OG9/YCUuG54m9E=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=teXs1ith0O6Ndog2xp8NArQK5wlHRxi8AYEzEPEd2LsnB+0G7rUBjcxmpB7i495NH+BWFoDMxSTX2AUj2nFLF+5bv8J1Lpj2KPuh9K0jTIT9aTHu/lF6xHl1yhnxfVeN899S++b1kfG8v/aeFGvu/w9YLvherVdjbQNanye/oRE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=YEorkEb+; arc=none smtp.client-ip=209.85.215.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="YEorkEb+" Received: by mail-pg1-f173.google.com with SMTP id 41be03b00d2f7-76cb5b6b3e4so2958286a12.1; Tue, 30 Jul 2024 11:34:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364486; x=1722969286; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9/o4jte8rJSyLfv/o/bxl996aMu/g1aGy8txwuXlNI8=; b=YEorkEb+Bzl+5CeaGdI1AQ6S6TKomaAnRwMwSvCob2A/pddmEb8RKjDLixPACVuLCB BQS8qfZgkmEg6ExD49lXfcoipS+ARVyzs/64VWyHqkBG++P93eVBtSYn6XhXidhjjjpY APK0gqF7euAyO4GDn2CDsTP4YV7qtD3CXZG8zuciThY3Vaf67jEvt+5ftxfO1f379WuF VKjxIF7ZzRHrryIzJNsv8GVJjpuKRUlyfK/Tw35r9oelojjHtk1q7t7xDWkOweeERrn3 n3vfloFeQfU/m0xrhFIlPKT4AUYs3p0ZM0xXxbt5XIo09mhlTQJJa3kVviG1afmZnbBe DwNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364486; x=1722969286; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9/o4jte8rJSyLfv/o/bxl996aMu/g1aGy8txwuXlNI8=; b=K2+EKkU5zfKA9dwmVSAKPvuU/TLbSTgRD3gAelFTbM1GEu36DbW0ttqLmQG05W0VXm 1UlrL03Px6NlSa4XR9ZB4RIV8yrItILw9QeVm3RXlJQdZzwuPK8ESOfWud+a24EuLeoX 38VTu0vJxBFCV92tWUYkgyQDD/CnYJT8jDMTha3GZDMmtJ3Ap06NAvlRmmUr75uYRnyu yuXS/ZvbCPCRI91IG/tDrqRZhoBnt7IXC1AuRMgquIg1rr0rkDOMXz1W2TxzKYkbIX0D tWmIYVw5S6aDrQT3SOJgaLWWh1yfFz0WfOB51kNzvxeZ2I7ejaUgm7nPvVyThUbucxeq YGlA== X-Forwarded-Encrypted: i=1; AJvYcCW6HlmyrnAWPrbhzazbTybPG1yrjtuRDnWoJsU/rO4GLnOvPDhYyARdZ7lXuOSg8KUqIhaiBoqDERivAyAbHWsnMwAsFR6oRP1gXD0IcMoygTFT0Sd1KORCnKlhcfwSZM7G00ecvPu30LQVduliR1XU+1Ag/gugAiI+jkfKg0BT0w== X-Gm-Message-State: AOJu0YxWCngqN66PXLFg/SxJj81qAfap30JYA/qH4MASJJSSlLP4SnSd p4R0j2oZRwMJBR1WS8Ez9/7FwRhLl3P8JDD44Z32D+lyYDm7mttO X-Google-Smtp-Source: AGHT+IGzTBr0vkyMv23wVvhNpJLqIZ9X+i9f11iA1tZboMTTSqcscHqHJfajMlLIFlgYxY/rQBa59w== X-Received: by 2002:a05:6a20:2588:b0:1c2:94ad:1c5d with SMTP id adf61e73a8af0-1c4a117dd82mr11599199637.2.1722364486353; Tue, 30 Jul 2024 11:34:46 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:45 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Marcin Wojtas , Russell King , "David S. Miller" , Eric Dumazet , Paolo Abeni , Mirko Lindner , Stephen Hemminger Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais Subject: [net-next v3 14/15] net: marvell: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:34:02 -0700 Message-Id: <20240730183403.4176544-15-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the marvell drivers. This transition ensures compatibility with the latest design and enhances performance. 
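In the skge part of this patch the slow, spin-waiting PHY register access stays deferred out of hard-IRQ context; only the deferral mechanism changes. A minimal sketch with a hypothetical foo_hw structure and a hypothetical FOO_IS_EXT_REG bit (not the actual skge code):

#include <linux/bits.h>
#include <linux/interrupt.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define FOO_IS_EXT_REG	BIT(5)	/* hypothetical PHY interrupt source bit */

struct foo_hw {
	struct work_struct phy_bh_work;	/* was: struct tasklet_struct phy_task */
	u32 intr_mask;
};

static irqreturn_t foo_intr(int irq, void *dev_id)
{
	struct foo_hw *hw = dev_id;

	/* Mask the PHY source and defer the spin-waiting register
	 * access to the bottom-half workqueue, where the tasklet
	 * used to run.
	 */
	hw->intr_mask &= ~FOO_IS_EXT_REG;
	queue_work(system_bh_wq, &hw->phy_bh_work);
	return IRQ_HANDLED;
}
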
Signed-off-by: Allen Pais --- drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 9 ++++++--- drivers/net/ethernet/marvell/skge.c | 12 ++++++------ drivers/net/ethernet/marvell/skge.h | 3 ++- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 8c45ad983abc..adffbbd20962 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -2628,9 +2628,12 @@ static u32 mvpp2_txq_desc_csum(int l3_offs, __be16 l3_proto, * The number of sent descriptors is returned. * Per-thread access * - * Called only from mvpp2_txq_done(), called from mvpp2_tx() - * (migration disabled) and from the TX completion tasklet (migration - * disabled) so using smp_processor_id() is OK. + * Called only from mvpp2_txq_done(). + * + * Historically, this function was invoked directly from mvpp2_tx() + * (with migration disabled) and from the bottom half workqueue. + * Verify that the use of smp_processor_id() is still appropriate + * considering the current bottom half workqueue implementation. */ static inline int mvpp2_txq_sent_desc_proc(struct mvpp2_port *port, struct mvpp2_tx_queue *txq) diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index fcfb34561882..4448af079447 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -3342,13 +3342,13 @@ static void skge_error_irq(struct skge_hw *hw) } /* - * Interrupt from PHY are handled in tasklet (softirq) + * Interrupt from PHY are handled in bh work (softirq) * because accessing phy registers requires spin wait which might * cause excess interrupt latency. */ -static void skge_extirq(struct tasklet_struct *t) +static void skge_extirq(struct work_struct *work) { - struct skge_hw *hw = from_tasklet(hw, t, phy_task); + struct skge_hw *hw = from_work(hw, work, phy_bh_work); int port; for (port = 0; port < hw->ports; port++) { @@ -3389,7 +3389,7 @@ static irqreturn_t skge_intr(int irq, void *dev_id) status &= hw->intr_mask; if (status & IS_EXT_REG) { hw->intr_mask &= ~IS_EXT_REG; - tasklet_schedule(&hw->phy_task); + queue_work(system_bh_wq, &hw->phy_bh_work); } if (status & (IS_XA1_F|IS_R1_F)) { @@ -3937,7 +3937,7 @@ static int skge_probe(struct pci_dev *pdev, const struct pci_device_id *ent) hw->pdev = pdev; spin_lock_init(&hw->hw_lock); spin_lock_init(&hw->phy_lock); - tasklet_setup(&hw->phy_task, skge_extirq); + INIT_WORK(&hw->phy_bh_work, skge_extirq); hw->regs = ioremap(pci_resource_start(pdev, 0), 0x4000); if (!hw->regs) { @@ -4035,7 +4035,7 @@ static void skge_remove(struct pci_dev *pdev) dev0 = hw->dev[0]; unregister_netdev(dev0); - tasklet_kill(&hw->phy_task); + cancel_work_sync(&hw->phy_bh_work); spin_lock_irq(&hw->hw_lock); hw->intr_mask = 0; diff --git a/drivers/net/ethernet/marvell/skge.h b/drivers/net/ethernet/marvell/skge.h index f72217348eb4..0cf77f4b1c57 100644 --- a/drivers/net/ethernet/marvell/skge.h +++ b/drivers/net/ethernet/marvell/skge.h @@ -5,6 +5,7 @@ #ifndef _SKGE_H #define _SKGE_H #include +#include /* PCI config registers */ #define PCI_DEV_REG1 0x40 @@ -2418,7 +2419,7 @@ struct skge_hw { u32 ram_offset; u16 phy_addr; spinlock_t phy_lock; - struct tasklet_struct phy_task; + struct work_struct phy_bh_work; char irq_name[]; /* skge@pci:000:04:00.0 */ }; From patchwork Tue Jul 30 18:34:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen 
X-Patchwork-Id: 13747752 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 90CB01A8C1A; Tue, 30 Jul 2024 18:34:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364491; cv=none; b=uk4BC2Q+mzbokB8y1FUpwJUalv7L9p3AhBpw/2GQCzi8sn4cfRKx8u3TbyVhZWqFdls06H57Mi82ukdwQGJPwdu6u1hkUG1ZIHjCeP47VkGjz6bIFF+ytjba+LkqZUScUwQkNR2OYKuRVNy0gYwIbUj1WizuQwpHg5xuGUhA+fE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722364491; c=relaxed/simple; bh=cI06cWMeiiaaHvVrivHUrUMAP27szVAGIwpicbm6Rjk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=gv24Xz4QXpRQWR8deQZ2NHTOEEARxAoipXi3i6YMlHyJDRjRAk8Ty9mqat0KbEsQ1ky0f2LLp+klDZtkDLKo9Oyy9Dbh32Jas6GjuXvGaj9jdLO2UivlQqwirI6yMYVa0BgXVxnDEAW1vqCiCooEyZNHHj+cIBM8Crf6rEyoe0c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=DSXFyeU5; arc=none smtp.client-ip=209.85.210.180 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="DSXFyeU5" Received: by mail-pf1-f180.google.com with SMTP id d2e1a72fcca58-70d316f0060so103127b3a.1; Tue, 30 Jul 2024 11:34:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1722364489; x=1722969289; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Duz+R7OD5yozyO0xOdsWZmjZsXtc7sydmF+n4fo7yxk=; b=DSXFyeU59qGR0zDPifqdax5QeLucKdmOEIU7nvy0g1tQuAY57S3MwEmrSAknaOUaZ9 uWxV6cs17gO+BcT3HxE2aE9LqRM3DusMx96kJFK8OF9zaYc2pNwEjo9OlbaWqtPOthdv GwKDhZl1FwzyjhJX7k2XfVXcokoEYaRBXvdYB6bkpLV10C3+xKz+LxkxCL+UjH/QPpJH uoc1c0fjhv6zEywQlEUYBdtXdfQ9oCEPUYPobyWab6TuBuC1no7IwFpJJtbNacTghg31 Dy1MBJOMD1cOmJ+JsGCW4ZDhQh1o7gTr7GwH0ejObaoIkJILZwxV9rctt98I30wd8vqW SLHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722364489; x=1722969289; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Duz+R7OD5yozyO0xOdsWZmjZsXtc7sydmF+n4fo7yxk=; b=jDjPWxihWV5PC4w1WqZ+cRdcioo0qkSfV8F46Hnk+1dTxqSyMTqv+7Je4EegGaJ/uX bXa9mjNn3xBEou84IPMx7COL8y/S0K0SIUx/8ETBr/dPpJxHAW238MPtmXjm7nJxR4fL 6/WIhxkrC1fdIHGFz0pGn8YP1X8QNzaepAPcrueyqplXWo902N4zSg2dP2RVcx8kYCLh PZXaMlPuxnVZN2mLJ7Qy8LiaLCccHFjJj19fW+FYcO4Co/hB9ia4cctz+xVinVkUtB9z PJw+O6VxMdBoYIaQAws0zJCb9V70DBx6BZ/JFvm25Iwh9UwwgPOGC7lQsXpHeoP1IS7B cmTw== X-Forwarded-Encrypted: i=1; AJvYcCVUAPkaenQoLmjuMo+K8N6c1ktS4oKZwtA6uxU+Q1jVEfoBaLRIzWrObcmz2gtE2QelDfX13kJWFP4DfDY4QsQGiLtezSerC4qJwNp0J7vWq0zUX7Oa9MojniXG4kxib/c2DRN7j8/MAj4tsP48+Dx1CEmnusBX4bs0A/DHIWm19w== X-Gm-Message-State: AOJu0YwilgX/5rhLaaemuBi7spbkipV+F37my6Y5XYFtce/ln+EIQUrX G/M7RhqZLkf6RtsCbU0bQqbm+1f1/kd22wGAHaGln0bM7O6XLhyh 
X-Google-Smtp-Source: AGHT+IGxxd98poQZ5VsvVAwrejG7muttGAGaSqTTcastI4KpGM1zGNrfIl2gMAD8HKtiDAx4TeEmCQ== X-Received: by 2002:a05:6a20:8425:b0:1c4:8294:95da with SMTP id adf61e73a8af0-1c4e4883b4cmr4197746637.26.1722364488814; Tue, 30 Jul 2024 11:34:48 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:f2df:af9:e1f6:390e]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-7a9f817f5a2sm7837763a12.24.2024.07.30.11.34.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 30 Jul 2024 11:34:48 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Felix Fietkau , Sean Wang , Mark Lee , Lorenzo Bianconi , "David S. Miller" , Eric Dumazet , Paolo Abeni , Matthias Brugger , AngeloGioacchino Del Regno Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, netdev@vger.kernel.org, Allen Pais , linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org Subject: [net-next v3 15/15] net: mtk-wed: Convert tasklet API to new bottom half workqueue mechanism Date: Tue, 30 Jul 2024 11:34:03 -0700 Message-Id: <20240730183403.4176544-16-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240730183403.4176544-1-allen.lkml@gmail.com> References: <20240730183403.4176544-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the mtk-wed driver. This transition ensures compatibility with the latest design and enhances performance. 
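As with the other drivers, the mapping here is mechanical; the point worth calling out is teardown, where tasklet_disable() becomes disable_work_sync(). A minimal sketch with hypothetical foo_wo/mmio names (not the mtk_wed_wo code):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct foo_wo {
	struct {
		struct work_struct irq_bh_work;	/* was: struct tasklet_struct irq_tasklet */
		int irq;
	} mmio;
};

static void foo_wo_hw_deinit(struct foo_wo *wo)
{
	/* disable_work_sync() cancels any pending instance, waits for a
	 * running one to finish, and keeps the work disabled until a
	 * later enable_work(), taking over the role tasklet_disable()
	 * had here.
	 */
	disable_work_sync(&wo->mmio.irq_bh_work);
	disable_irq(wo->mmio.irq);
}
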
Signed-off-by: Allen Pais --- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 12 ++++++------ drivers/net/ethernet/mediatek/mtk_wed_wo.h | 3 ++- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 7063c78bd35f..acca9ec67fcf 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -71,7 +71,7 @@ static void mtk_wed_wo_irq_enable(struct mtk_wed_wo *wo, u32 mask) { mtk_wed_wo_set_isr_mask(wo, 0, mask, false); - tasklet_schedule(&wo->mmio.irq_tasklet); + queue_work(system_bh_wq, &wo->mmio.irq_bh_work); } static void @@ -227,14 +227,14 @@ mtk_wed_wo_irq_handler(int irq, void *data) struct mtk_wed_wo *wo = data; mtk_wed_wo_set_isr(wo, 0); - tasklet_schedule(&wo->mmio.irq_tasklet); + queue_work(system_bh_wq, &wo->mmio.irq_bh_work); return IRQ_HANDLED; } -static void mtk_wed_wo_irq_tasklet(struct tasklet_struct *t) +static void mtk_wed_wo_irq_bh_work(struct work_struct *work) { - struct mtk_wed_wo *wo = from_tasklet(wo, t, mmio.irq_tasklet); + struct mtk_wed_wo *wo = from_work(wo, work, mmio.irq_bh_work); u32 intr, mask; /* disable interrupts */ @@ -395,7 +395,7 @@ mtk_wed_wo_hardware_init(struct mtk_wed_wo *wo) wo->mmio.irq = irq_of_parse_and_map(np, 0); wo->mmio.irq_mask = MTK_WED_WO_ALL_INT_MASK; spin_lock_init(&wo->mmio.lock); - tasklet_setup(&wo->mmio.irq_tasklet, mtk_wed_wo_irq_tasklet); + INIT_WORK(&wo->mmio.irq_bh_work, mtk_wed_wo_irq_bh_work); ret = devm_request_irq(wo->hw->dev, wo->mmio.irq, mtk_wed_wo_irq_handler, IRQF_TRIGGER_HIGH, @@ -449,7 +449,7 @@ mtk_wed_wo_hw_deinit(struct mtk_wed_wo *wo) /* disable interrupts */ mtk_wed_wo_set_isr(wo, 0); - tasklet_disable(&wo->mmio.irq_tasklet); + disable_work_sync(&wo->mmio.irq_bh_work); disable_irq(wo->mmio.irq); devm_free_irq(wo->hw->dev, wo->mmio.irq, wo); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h index 87a67fa3868d..50d619fa213a 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h @@ -6,6 +6,7 @@ #include #include +#include struct mtk_wed_hw; @@ -247,7 +248,7 @@ struct mtk_wed_wo { struct regmap *regs; spinlock_t lock; - struct tasklet_struct irq_tasklet; + struct work_struct irq_bh_work; int irq; u32 irq_mask; } mmio;