From patchwork Fri Jun 21 05:05:11 2024
X-Patchwork-Id: 13706820
From: Allen Pais
To: kuba@kernel.org, Jes Sorensen, "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: [PATCH 01/15] net: alteon: Convert tasklet API to new bottom half workqueue mechanism
Date: Thu, 20 Jun 2024 22:05:11 -0700
Message-Id: <20240621050525.3720069-2-allen.lkml@gmail.com>
In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com>
References: <20240621050525.3720069-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. It
replaces all occurrences of tasklet usage with the appropriate workqueue
APIs throughout the alteon driver. This transition ensures compatibility
with the latest design and enhances performance.
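The conversion recipe is the same throughout the series; as a quick reference, here is a minimal sketch on a hypothetical "my_dev" driver (none of these names come from the patch) of how each tasklet call maps onto the BH workqueue API used below: tasklet_setup() becomes INIT_WORK(), tasklet_schedule() becomes queue_work(system_bh_wq, ...), from_tasklet() becomes from_work(), and tasklet_kill() becomes cancel_work_sync().

    #include <linux/workqueue.h>

    /* Hypothetical private struct: the tasklet_struct member becomes a work_struct. */
    struct my_dev {
        struct work_struct refill_bh_work;  /* was: struct tasklet_struct refill_tasklet */
    };

    /* Was: static void my_dev_tasklet(struct tasklet_struct *t) */
    static void my_dev_bh_work(struct work_struct *work)
    {
        /* from_work() is the container_of() helper that replaces from_tasklet(). */
        struct my_dev *dev = from_work(dev, work, refill_bh_work);

        /* deferred bottom-half processing, exactly what the tasklet callback did */
    }

    static void my_dev_setup(struct my_dev *dev)
    {
        INIT_WORK(&dev->refill_bh_work, my_dev_bh_work);  /* was: tasklet_setup() */
    }

    static void my_dev_irq_defer(struct my_dev *dev)
    {
        queue_work(system_bh_wq, &dev->refill_bh_work);   /* was: tasklet_schedule() */
    }

    static void my_dev_teardown(struct my_dev *dev)
    {
        cancel_work_sync(&dev->refill_bh_work);           /* was: tasklet_kill() */
    }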
Signed-off-by: Allen Pais
---
 drivers/net/ethernet/alteon/acenic.c | 26 +++++++++++++-------------
 drivers/net/ethernet/alteon/acenic.h |  8 ++++----
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/alteon/acenic.c b/drivers/net/ethernet/alteon/acenic.c
index 3d8ac63132fb..9e6f91df2ba0 100644
--- a/drivers/net/ethernet/alteon/acenic.c
+++ b/drivers/net/ethernet/alteon/acenic.c
@@ -1560,9 +1560,9 @@ static void ace_watchdog(struct net_device *data, unsigned int txqueue)
 }


-static void ace_tasklet(struct tasklet_struct *t)
+static void ace_bh_work(struct work_struct *work)
 {
-    struct ace_private *ap = from_tasklet(ap, t, ace_tasklet);
+    struct ace_private *ap = from_work(ap, work, ace_bh_work);
     struct net_device *dev = ap->ndev;
     int cur_size;

@@ -1595,7 +1595,7 @@ static void ace_tasklet(struct tasklet_struct *t)
 #endif
             ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size);
     }
-    ap->tasklet_pending = 0;
+    ap->bh_work_pending = 0;
 }


@@ -1617,7 +1617,7 @@ static void ace_dump_trace(struct ace_private *ap)
  *
  * Loading rings is safe without holding the spin lock since this is
  * done only before the device is enabled, thus no interrupts are
- * generated and by the interrupt handler/tasklet handler.
+ * generated and by the interrupt handler/bh handler.
  */
 static void ace_load_std_rx_ring(struct net_device *dev, int nr_bufs)
 {
@@ -2160,7 +2160,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id)
      */
     if (netif_running(dev)) {
         int cur_size;
-        int run_tasklet = 0;
+        int run_bh_work = 0;

         cur_size = atomic_read(&ap->cur_rx_bufs);
         if (cur_size < RX_LOW_STD_THRES) {
@@ -2172,7 +2172,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id)
                 ace_load_std_rx_ring(dev, RX_RING_SIZE - cur_size);
             } else
-                run_tasklet = 1;
+                run_bh_work = 1;
         }

         if (!ACE_IS_TIGON_I(ap)) {
@@ -2188,7 +2188,7 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id)
                     ace_load_mini_rx_ring(dev, RX_MINI_SIZE - cur_size);
                 } else
-                    run_tasklet = 1;
+                    run_bh_work = 1;
             }
         }

@@ -2205,12 +2205,12 @@ static irqreturn_t ace_interrupt(int irq, void *dev_id)
                     ace_load_jumbo_rx_ring(dev, RX_JUMBO_SIZE - cur_size);
                 } else
-                    run_tasklet = 1;
+                    run_bh_work = 1;
             }
         }

-        if (run_tasklet && !ap->tasklet_pending) {
-            ap->tasklet_pending = 1;
-            tasklet_schedule(&ap->ace_tasklet);
+        if (run_bh_work && !ap->bh_work_pending) {
+            ap->bh_work_pending = 1;
+            queue_work(system_bh_wq, &ap->ace_bh_work);
         }
     }

@@ -2267,7 +2267,7 @@ static int ace_open(struct net_device *dev)
     /*
      * Setup the bottom half rx ring refill handler
      */
-    tasklet_setup(&ap->ace_tasklet, ace_tasklet);
+    INIT_WORK(&ap->ace_bh_work, ace_bh_work);
     return 0;
 }

@@ -2301,7 +2301,7 @@ static int ace_close(struct net_device *dev)
     cmd.idx = 0;
     ace_issue_cmd(regs, &cmd);

-    tasklet_kill(&ap->ace_tasklet);
+    cancel_work_sync(&ap->ace_bh_work);

     /*
      * Make sure one CPU is not processing packets while
diff --git a/drivers/net/ethernet/alteon/acenic.h b/drivers/net/ethernet/alteon/acenic.h
index ca5ce0cbbad1..0e45a97b9c9b 100644
--- a/drivers/net/ethernet/alteon/acenic.h
+++ b/drivers/net/ethernet/alteon/acenic.h
@@ -2,7 +2,7 @@
 #ifndef _ACENIC_H_
 #define _ACENIC_H_
 #include
-
+#include

 /*
  * Generate TX index update each time, when TX ring is closed.
@@ -667,8 +667,8 @@ struct ace_private
     struct rx_desc *rx_mini_ring;
     struct rx_desc *rx_return_ring;

-    int tasklet_pending, jumbo;
-    struct tasklet_struct ace_tasklet;
+    int bh_work_pending, jumbo;
+    struct work_struct ace_bh_work;

     struct event *evt_ring;

@@ -776,7 +776,7 @@ static int ace_open(struct net_device *dev);
 static netdev_tx_t ace_start_xmit(struct sk_buff *skb, struct net_device *dev);
 static int ace_close(struct net_device *dev);
-static void ace_tasklet(struct tasklet_struct *t);
+static void ace_bh_work(struct work_struct *work);
 static void ace_dump_trace(struct ace_private *ap);
 static void ace_set_multicast_list(struct net_device *dev);
 static int ace_change_mtu(struct net_device *dev, int new_mtu);


From patchwork Fri Jun 21 05:05:12 2024
X-Patchwork-Id: 13706821
From: Allen Pais
To: kuba@kernel.org, Shyam Sundar S K, "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: [PATCH 02/15] net: xgbe: Convert tasklet API to new bottom half workqueue mechanism
Date: Thu, 20 Jun 2024 22:05:12 -0700
Message-Id: <20240621050525.3720069-3-allen.lkml@gmail.com>
In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com>
References: <20240621050525.3720069-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. It
replaces all occurrences of tasklet usage with the appropriate workqueue
APIs throughout the xgbe driver. This transition ensures compatibility
with the latest design and enhances performance.
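Worth noting in this patch: xgbe keeps its isr_as_tasklet trick, now renamed isr_as_bh_work. With MSI/MSI-X the hard IRQ handler only queues the BH work item; otherwise it calls the handler function directly. A condensed, hypothetical sketch of that dispatch (the my_* names are stand-ins, not the driver's real identifiers):

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    /* Hypothetical pdata layout, only the fields the sketch needs. */
    struct my_pdata {
        unsigned int isr_as_bh_work;    /* set when MSI/MSI-X is in use */
        struct work_struct dev_bh_work;
    };

    static void my_isr_bh_work(struct work_struct *work)
    {
        struct my_pdata *pdata = from_work(pdata, work, dev_bh_work);

        /* read and service the device interrupt status here */
    }

    static irqreturn_t my_isr(int irq, void *data)
    {
        struct my_pdata *pdata = data;

        if (pdata->isr_as_bh_work)
            queue_work(system_bh_wq, &pdata->dev_bh_work); /* was: tasklet_schedule() */
        else
            my_isr_bh_work(&pdata->dev_bh_work);           /* run the handler directly in the hard IRQ */

        return IRQ_HANDLED;
    }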
Signed-off-by: Allen Pais
---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c  | 30 +++++++++++-------------
 drivers/net/ethernet/amd/xgbe/xgbe-i2c.c  | 16 ++++++------
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 16 ++++++------
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c  |  4 +--
 drivers/net/ethernet/amd/xgbe/xgbe.h      | 10 ++++----
 5 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index c4a4e316683f..5475867708f4 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -403,9 +403,9 @@ static bool xgbe_ecc_ded(struct xgbe_prv_data *pdata, unsigned long *period,
     return false;
 }

-static void xgbe_ecc_isr_task(struct tasklet_struct *t)
+static void xgbe_ecc_isr_bh_work(struct work_struct *work)
 {
-    struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_ecc);
+    struct xgbe_prv_data *pdata = from_work(pdata, work, ecc_bh_work);
     unsigned int ecc_isr;
     bool stop = false;

@@ -465,17 +465,17 @@ static irqreturn_t xgbe_ecc_isr(int irq, void *data)
 {
     struct xgbe_prv_data *pdata = data;

-    if (pdata->isr_as_tasklet)
-        tasklet_schedule(&pdata->tasklet_ecc);
+    if (pdata->isr_as_bh_work)
+        queue_work(system_bh_wq, &pdata->ecc_bh_work);
     else
-        xgbe_ecc_isr_task(&pdata->tasklet_ecc);
+        xgbe_ecc_isr_bh_work(&pdata->ecc_bh_work);

     return IRQ_HANDLED;
 }

-static void xgbe_isr_task(struct tasklet_struct *t)
+static void xgbe_isr_bh_work(struct work_struct *work)
 {
-    struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_dev);
+    struct xgbe_prv_data *pdata = from_work(pdata, work, dev_bh_work);
     struct xgbe_hw_if *hw_if = &pdata->hw_if;
     struct xgbe_channel *channel;
     unsigned int dma_isr, dma_ch_isr;
@@ -582,7 +582,7 @@ static void xgbe_isr_task(struct tasklet_struct *t)

     /* If there is not a separate ECC irq, handle it here */
     if (pdata->vdata->ecc_support && (pdata->dev_irq == pdata->ecc_irq))
-        xgbe_ecc_isr_task(&pdata->tasklet_ecc);
+        xgbe_ecc_isr_bh_work(&pdata->ecc_bh_work);

     /* If there is not a separate I2C irq, handle it here */
     if (pdata->vdata->i2c_support && (pdata->dev_irq == pdata->i2c_irq))
@@ -604,10 +604,10 @@ static irqreturn_t xgbe_isr(int irq, void *data)
 {
     struct xgbe_prv_data *pdata = data;

-    if (pdata->isr_as_tasklet)
-        tasklet_schedule(&pdata->tasklet_dev);
+    if (pdata->isr_as_bh_work)
+        queue_work(system_bh_wq, &pdata->dev_bh_work);
     else
-        xgbe_isr_task(&pdata->tasklet_dev);
+        xgbe_isr_bh_work(&pdata->dev_bh_work);

     return IRQ_HANDLED;
 }
@@ -1007,8 +1007,8 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
     unsigned int i;
     int ret;

-    tasklet_setup(&pdata->tasklet_dev, xgbe_isr_task);
-    tasklet_setup(&pdata->tasklet_ecc, xgbe_ecc_isr_task);
+    INIT_WORK(&pdata->dev_bh_work, xgbe_isr_bh_work);
+    INIT_WORK(&pdata->ecc_bh_work, xgbe_ecc_isr_bh_work);

     ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
                    netdev_name(netdev), pdata);
@@ -1078,8 +1078,8 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)

     devm_free_irq(pdata->dev, pdata->dev_irq, pdata);

-    tasklet_kill(&pdata->tasklet_dev);
-    tasklet_kill(&pdata->tasklet_ecc);
+    cancel_work_sync(&pdata->dev_bh_work);
+    cancel_work_sync(&pdata->ecc_bh_work);

     if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq))
         devm_free_irq(pdata->dev, pdata->ecc_irq, pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
index a9ccc4258ee5..7a833894f52a 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
@@ -274,9 +274,9 @@ static void xgbe_i2c_clear_isr_interrupts(struct xgbe_prv_data *pdata,
     XI2C_IOREAD(pdata, IC_CLR_STOP_DET);
 }

-static void xgbe_i2c_isr_task(struct tasklet_struct *t)
+static void xgbe_i2c_isr_bh_work(struct work_struct *work)
 {
-    struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_i2c);
+    struct xgbe_prv_data *pdata = from_work(pdata, work, i2c_bh_work);
     struct xgbe_i2c_op_state *state = &pdata->i2c.op_state;
     unsigned int isr;

@@ -321,10 +321,10 @@ static irqreturn_t xgbe_i2c_isr(int irq, void *data)
 {
     struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;

-    if (pdata->isr_as_tasklet)
-        tasklet_schedule(&pdata->tasklet_i2c);
+    if (pdata->isr_as_bh_work)
+        queue_work(system_bh_wq, &pdata->i2c_bh_work);
     else
-        xgbe_i2c_isr_task(&pdata->tasklet_i2c);
+        xgbe_i2c_isr_bh_work(&pdata->i2c_bh_work);

     return IRQ_HANDLED;
 }
@@ -369,7 +369,7 @@ static void xgbe_i2c_set_target(struct xgbe_prv_data *pdata, unsigned int addr)

 static irqreturn_t xgbe_i2c_combined_isr(struct xgbe_prv_data *pdata)
 {
-    xgbe_i2c_isr_task(&pdata->tasklet_i2c);
+    xgbe_i2c_isr_bh_work(&pdata->i2c_bh_work);

     return IRQ_HANDLED;
 }
@@ -449,7 +449,7 @@ static void xgbe_i2c_stop(struct xgbe_prv_data *pdata)
     if (pdata->dev_irq != pdata->i2c_irq) {
         devm_free_irq(pdata->dev, pdata->i2c_irq, pdata);
-        tasklet_kill(&pdata->tasklet_i2c);
+        cancel_work_sync(&pdata->i2c_bh_work);
     }
 }

@@ -464,7 +464,7 @@ static int xgbe_i2c_start(struct xgbe_prv_data *pdata)

     /* If we have a separate I2C irq, enable it */
     if (pdata->dev_irq != pdata->i2c_irq) {
-        tasklet_setup(&pdata->tasklet_i2c, xgbe_i2c_isr_task);
+        INIT_WORK(&pdata->i2c_bh_work, xgbe_i2c_isr_bh_work);

         ret = devm_request_irq(pdata->dev, pdata->i2c_irq,
                        xgbe_i2c_isr, 0, pdata->i2c_name,
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 4a2dc705b528..07f4f3418d01 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -703,9 +703,9 @@ static void xgbe_an73_isr(struct xgbe_prv_data *pdata)
     }
 }

-static void xgbe_an_isr_task(struct tasklet_struct *t)
+static void xgbe_an_isr_bh_work(struct work_struct *work)
 {
-    struct xgbe_prv_data *pdata = from_tasklet(pdata, t, tasklet_an);
+    struct xgbe_prv_data *pdata = from_work(pdata, work, an_bh_work);

     netif_dbg(pdata, intr, pdata->netdev, "AN interrupt received\n");

@@ -727,17 +727,17 @@ static irqreturn_t xgbe_an_isr(int irq, void *data)
 {
     struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;

-    if (pdata->isr_as_tasklet)
-        tasklet_schedule(&pdata->tasklet_an);
+    if (pdata->isr_as_bh_work)
+        queue_work(system_bh_wq, &pdata->an_bh_work);
     else
-        xgbe_an_isr_task(&pdata->tasklet_an);
+        xgbe_an_isr_bh_work(&pdata->an_bh_work);

     return IRQ_HANDLED;
 }

 static irqreturn_t xgbe_an_combined_isr(struct xgbe_prv_data *pdata)
 {
-    xgbe_an_isr_task(&pdata->tasklet_an);
+    xgbe_an_isr_bh_work(&pdata->an_bh_work);

     return IRQ_HANDLED;
 }
@@ -1454,7 +1454,7 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
     if (pdata->dev_irq != pdata->an_irq) {
         devm_free_irq(pdata->dev, pdata->an_irq, pdata);
-        tasklet_kill(&pdata->tasklet_an);
+        cancel_work_sync(&pdata->an_bh_work);
     }

     pdata->phy_if.phy_impl.stop(pdata);
@@ -1477,7 +1477,7 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata)

     /* If we have a separate AN irq, enable it */
     if (pdata->dev_irq != pdata->an_irq) {
-        tasklet_setup(&pdata->tasklet_an, xgbe_an_isr_task);
+        INIT_WORK(&pdata->an_bh_work, xgbe_an_isr_bh_work);

         ret = devm_request_irq(pdata->dev, pdata->an_irq,
                        xgbe_an_isr, 0, pdata->an_name,
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
index c5e5fac49779..c636999a6a84 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
@@ -139,7 +139,7 @@ static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata)
         return ret;
     }

-    pdata->isr_as_tasklet = 1;
+    pdata->isr_as_bh_work = 1;
     pdata->irq_count = ret;

     pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
@@ -176,7 +176,7 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
         return ret;
     }

-    pdata->isr_as_tasklet = pdata->pcidev->msi_enabled ? 1 : 0;
+    pdata->isr_as_bh_work = pdata->pcidev->msi_enabled ? 1 : 0;
     pdata->irq_count = 1;
     pdata->channel_irq_count = 1;

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index f01a1e566da6..d85386cac8d1 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -1298,11 +1298,11 @@ struct xgbe_prv_data {

     unsigned int lpm_ctrl;      /* CTRL1 for resume */

-    unsigned int isr_as_tasklet;
-    struct tasklet_struct tasklet_dev;
-    struct tasklet_struct tasklet_ecc;
-    struct tasklet_struct tasklet_i2c;
-    struct tasklet_struct tasklet_an;
+    unsigned int isr_as_bh_work;
+    struct work_struct dev_bh_work;
+    struct work_struct ecc_bh_work;
+    struct work_struct i2c_bh_work;
+    struct work_struct an_bh_work;

     struct dentry *xgbe_debugfs;


From patchwork Fri Jun 21 05:05:13 2024
X-Patchwork-Id: 13706822
From: Allen Pais
To: kuba@kernel.org, "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: [PATCH 03/15] net: cnic: Convert tasklet API to new bottom half workqueue mechanism
Date: Thu, 20 Jun 2024 22:05:13 -0700
Message-Id: <20240621050525.3720069-4-allen.lkml@gmail.com>
In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com>
References: <20240621050525.3720069-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. It
replaces all occurrences of tasklet usage with the appropriate workqueue
APIs throughout the cnic driver. This transition ensures compatibility
with the latest design and enhances performance.

Signed-off-by: Allen Pais
---
 drivers/net/ethernet/broadcom/cnic.c | 19 ++++++++++---------
 drivers/net/ethernet/broadcom/cnic.h |  2 +-
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index c2b4188a1ef1..a9040c42d2ff 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #if IS_ENABLED(CONFIG_VLAN_8021Q)
 #define BCM_VLAN 1
 #endif
@@ -3015,9 +3016,9 @@ static int cnic_service_bnx2(void *data, void *status_blk)
     return cnic_service_bnx2_queues(dev);
 }

-static void cnic_service_bnx2_msix(struct tasklet_struct *t)
+static void cnic_service_bnx2_msix(struct work_struct *work)
 {
-    struct cnic_local *cp = from_tasklet(cp, t, cnic_irq_task);
+    struct cnic_local *cp = from_work(cp, work, cnic_irq_bh_work);
     struct cnic_dev *dev = cp->dev;

     cp->last_status_idx = cnic_service_bnx2_queues(dev);
@@ -3036,7 +3037,7 @@ static void cnic_doirq(struct cnic_dev *dev)
         prefetch(cp->status_blk.gen);
         prefetch(&cp->kcq1.kcq[KCQ_PG(prod)][KCQ_IDX(prod)]);

-        tasklet_schedule(&cp->cnic_irq_task);
+        queue_work(system_bh_wq, &cp->cnic_irq_bh_work);
     }
 }

@@ -3140,9 +3141,9 @@ static u32 cnic_service_bnx2x_kcq(struct cnic_dev *dev, struct kcq_info *info)
     return last_status;
 }

-static void cnic_service_bnx2x_bh(struct tasklet_struct *t)
+static void cnic_service_bnx2x_bh_work(struct work_struct *work)
 {
-    struct cnic_local *cp = from_tasklet(cp, t, cnic_irq_task);
+    struct cnic_local *cp = from_work(cp, work, cnic_irq_bh_work);
     struct cnic_dev *dev = cp->dev;
     struct bnx2x *bp = netdev_priv(dev->netdev);
     u32 status_idx, new_status_idx;
@@ -4428,7 +4429,7 @@ static void cnic_free_irq(struct cnic_dev *dev)

     if (ethdev->drv_state & CNIC_DRV_STATE_USING_MSIX) {
         cp->disable_int_sync(dev);
-        tasklet_kill(&cp->cnic_irq_task);
+        cancel_work_sync(&cp->cnic_irq_bh_work);
         free_irq(ethdev->irq_arr[0].vector, dev);
     }
 }
@@ -4441,7 +4442,7 @@ static int cnic_request_irq(struct cnic_dev *dev)

     err = request_irq(ethdev->irq_arr[0].vector, cnic_irq, 0, "cnic", dev);
     if (err)
-        tasklet_disable(&cp->cnic_irq_task);
+        disable_work_sync(&cp->cnic_irq_bh_work);

     return err;
 }
@@ -4464,7 +4465,7 @@ static int cnic_init_bnx2_irq(struct cnic_dev *dev)
         CNIC_WR(dev, base + BNX2_HC_CMD_TICKS_OFF, (64 << 16) | 220);

         cp->last_status_idx = cp->status_blk.bnx2->status_idx;
-        tasklet_setup(&cp->cnic_irq_task, cnic_service_bnx2_msix);
+        INIT_WORK(&cp->cnic_irq_bh_work, cnic_service_bnx2_msix);
         err = cnic_request_irq(dev);
         if (err)
             return err;
@@ -4873,7 +4874,7 @@ static int cnic_init_bnx2x_irq(struct cnic_dev *dev)
     struct cnic_eth_dev *ethdev = cp->ethdev;
     int err = 0;

-    tasklet_setup(&cp->cnic_irq_task, cnic_service_bnx2x_bh);
+    INIT_WORK(&cp->cnic_irq_bh_work, cnic_service_bnx2x_bh_work);
     if (ethdev->drv_state & CNIC_DRV_STATE_USING_MSIX)
         err = cnic_request_irq(dev);

diff --git a/drivers/net/ethernet/broadcom/cnic.h b/drivers/net/ethernet/broadcom/cnic.h
index fedc84ada937..1a314a75d2d2 100644
--- a/drivers/net/ethernet/broadcom/cnic.h
+++ b/drivers/net/ethernet/broadcom/cnic.h
@@ -268,7 +268,7 @@ struct cnic_local {
     u32 bnx2x_igu_sb_id;
     u32 int_num;
     u32 last_status_idx;
-    struct tasklet_struct cnic_irq_task;
+    struct work_struct cnic_irq_bh_work;

     struct kcqe *completed_kcq[MAX_COMPLETED_KCQE];


From patchwork Fri Jun 21 05:05:14 2024
X-Patchwork-Id: 13706823
From: Allen Pais
To: kuba@kernel.org, Nicolas Ferre, Claudiu Beznea, "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: [PATCH 04/15] net: macb: Convert tasklet API to new bottom half workqueue mechanism
Date: Thu, 20 Jun 2024 22:05:14 -0700
Message-Id: <20240621050525.3720069-5-allen.lkml@gmail.com>
In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com>
References: <20240621050525.3720069-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. It
replaces all occurrences of tasklet usage with the appropriate workqueue
APIs throughout the macb driver. This transition ensures compatibility
with the latest design and enhances performance.

Signed-off-by: Allen Pais
---
 drivers/net/ethernet/cadence/macb.h      |  3 ++-
 drivers/net/ethernet/cadence/macb_main.c | 10 +++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index aa5700ac9c00..e570cad705d2 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 #if defined(CONFIG_ARCH_DMA_ADDR_T_64BIT) || defined(CONFIG_MACB_USE_HWSTAMP)
 #define MACB_EXT_DESC
@@ -1322,7 +1323,7 @@ struct macb {
     spinlock_t rx_fs_lock;
     unsigned int max_tuples;

-    struct tasklet_struct hresp_err_tasklet;
+    struct work_struct hresp_err_bh_work;

     int rx_bd_rd_prefetch;
     int tx_bd_rd_prefetch;
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 241ce9a2fa99..0dc21a9ae215 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1792,9 +1792,9 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
     return work_done;
 }

-static void macb_hresp_error_task(struct tasklet_struct *t)
+static void macb_hresp_error_task(struct work_struct *work)
 {
-    struct macb *bp = from_tasklet(bp, t, hresp_err_tasklet);
+    struct macb *bp = from_work(bp, work, hresp_err_bh_work);
     struct net_device *dev = bp->dev;
     struct macb_queue *queue;
     unsigned int q;
@@ -1994,7 +1994,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
         }

         if (status & MACB_BIT(HRESP)) {
-            tasklet_schedule(&bp->hresp_err_tasklet);
+            queue_work(system_bh_wq, &bp->hresp_err_bh_work);
             netdev_err(dev, "DMA bus error: HRESP not OK\n");

             if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -5150,7 +5150,7 @@ static int macb_probe(struct platform_device *pdev)
         goto err_out_unregister_mdio;
     }

-    tasklet_setup(&bp->hresp_err_tasklet, macb_hresp_error_task);
+    INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task);

     netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
             macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
@@ -5194,7 +5194,7 @@ static void macb_remove(struct platform_device *pdev)
         mdiobus_free(bp->mii_bus);

         unregister_netdev(dev);
-        tasklet_kill(&bp->hresp_err_tasklet);
+        cancel_work_sync(&bp->hresp_err_bh_work);
         pm_runtime_disable(&pdev->dev);
         pm_runtime_dont_use_autosuspend(&pdev->dev);
         if (!pm_runtime_suspended(&pdev->dev)) {


From patchwork Fri Jun 21 05:05:15 2024
X-Patchwork-Id: 13706824
From: Allen Pais
To: kuba@kernel.org, "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: [PATCH 05/15] net: cavium/liquidio: Convert tasklet API to new bottom half workqueue mechanism
Date: Thu, 20 Jun 2024 22:05:15 -0700
Message-Id: <20240621050525.3720069-6-allen.lkml@gmail.com>
In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com>
References: <20240621050525.3720069-1-allen.lkml@gmail.com>

Migrate tasklet APIs to the new bottom half workqueue mechanism. It
replaces all occurrences of tasklet usage with the appropriate workqueue
APIs throughout the cavium/liquidio driver. This transition ensures
compatibility with the latest design and enhances performance.
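Besides the usual mapping, this patch exercises the enable/disable pair: where the driver bracketed NAPI hand-over with tasklet_disable()/tasklet_enable(), it now uses disable_work_sync() and enable_and_queue_work(). A minimal sketch of that bracket, assuming a hypothetical my_priv structure (not the driver's real layout):

    #include <linux/workqueue.h>

    /* Hypothetical private data holding the deferred DROQ handler. */
    struct my_priv {
        struct work_struct droq_bh_work;
    };

    /* Before switching the queues over to NAPI, park the BH work so it
     * cannot run concurrently (this replaces tasklet_disable()). */
    static void my_napi_takeover(struct my_priv *priv)
    {
        disable_work_sync(&priv->droq_bh_work);
    }

    /* When handing the queues back, re-enable the work and queue it once so
     * anything that arrived meanwhile gets processed (replaces tasklet_enable()). */
    static void my_napi_handback(struct my_priv *priv)
    {
        enable_and_queue_work(system_bh_wq, &priv->droq_bh_work);
    }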
Signed-off-by: Allen Pais
Reviewed-by: Sunil Goutham
---
 .../net/ethernet/cavium/liquidio/lio_core.c    |  4 ++--
 .../net/ethernet/cavium/liquidio/lio_main.c    | 24 +++++++++----------
 .../ethernet/cavium/liquidio/lio_vf_main.c     | 10 ++++----
 .../ethernet/cavium/liquidio/octeon_droq.c     |  4 ++--
 .../ethernet/cavium/liquidio/octeon_main.h     |  4 ++--
 5 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
index 674c54831875..37307e02a6ff 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
@@ -925,7 +925,7 @@ int liquidio_schedule_msix_droq_pkt_handler(struct octeon_droq *droq, u64 ret)
         if (OCTEON_CN23XX_VF(oct))
             dev_err(&oct->pci_dev->dev,
                 "should not come here should not get rx when poll mode = 0 for vf\n");
-        tasklet_schedule(&oct_priv->droq_tasklet);
+        queue_work(system_bh_wq, &oct_priv->droq_bh_work);
         return 1;
     }
     /* this will be flushed periodically by check iq db */
@@ -975,7 +975,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct)
                 droq->ops.napi_fn(droq);
                 oct_priv->napi_mask |= BIT_ULL(oq_no);
             } else {
-                tasklet_schedule(&oct_priv->droq_tasklet);
+                queue_work(system_bh_wq, &oct_priv->droq_bh_work);
             }
         }
     }
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 1d79f6eaa41f..d348656c2f38 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -150,12 +150,12 @@ static int liquidio_set_vf_link_state(struct net_device *netdev, int vfidx,
 static struct handshake handshake[MAX_OCTEON_DEVICES];
 static struct completion first_stage;

-static void octeon_droq_bh(struct tasklet_struct *t)
+static void octeon_droq_bh(struct work_struct *work)
 {
     int q_no;
     int reschedule = 0;
-    struct octeon_device_priv *oct_priv = from_tasklet(oct_priv, t,
-                               droq_tasklet);
+    struct octeon_device_priv *oct_priv = from_work(oct_priv, work,
+                            droq_bh_work);
     struct octeon_device *oct = oct_priv->dev;

     for (q_no = 0; q_no < MAX_OCTEON_OUTPUT_QUEUES(oct); q_no++) {
@@ -180,7 +180,7 @@ static void octeon_droq_bh(struct tasklet_struct *t)
     }

     if (reschedule)
-        tasklet_schedule(&oct_priv->droq_tasklet);
+        queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 }

 static int lio_wait_for_oq_pkts(struct octeon_device *oct)
@@ -199,7 +199,7 @@ static int lio_wait_for_oq_pkts(struct octeon_device *oct)
         }
         if (pkt_cnt > 0) {
             pending_pkts += pkt_cnt;
-            tasklet_schedule(&oct_priv->droq_tasklet);
+            queue_work(system_bh_wq, &oct_priv->droq_bh_work);
         }
         pkt_cnt = 0;
         schedule_timeout_uninterruptible(1);
@@ -1130,7 +1130,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
         break;
     }                       /* end switch (oct->status) */

-    tasklet_kill(&oct_priv->droq_tasklet);
+    cancel_work_sync(&oct_priv->droq_bh_work);
 }

 /**
@@ -1234,7 +1234,7 @@ static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx)
     list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
         netif_napi_del(napi);

-    tasklet_enable(&oct_priv->droq_tasklet);
+    enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);

     if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
         unregister_netdev(netdev);
@@ -1770,7 +1770,7 @@ static int liquidio_open(struct net_device *netdev)
     int ret = 0;

     if (oct->props[lio->ifidx].napi_enabled == 0) {
-        tasklet_disable(&oct_priv->droq_tasklet);
+        disable_work_sync(&oct_priv->droq_bh_work);

         list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
             napi_enable(napi);
@@ -1896,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev)
         if (OCTEON_CN23XX_PF(oct))
             oct->droq[0]->ops.poll_mode = 0;

-        tasklet_enable(&oct_priv->droq_tasklet);
+        enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
     }

     dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
@@ -4204,9 +4204,9 @@ static int octeon_device_init(struct octeon_device *octeon_dev)
         }
     }

-    /* Initialize the tasklet that handles output queue packet processing.*/
-    dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq tasklet\n");
-    tasklet_setup(&oct_priv->droq_tasklet, octeon_droq_bh);
+    /* Initialize the bh work that handles output queue packet processing.*/
+    dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq bh work\n");
+    INIT_WORK(&oct_priv->droq_bh_work, octeon_droq_bh);

     /* Setup the interrupt handler and record the INT SUM register address
      */
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
index 62c2eadc33e3..04117625f388 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -87,7 +87,7 @@ static int lio_wait_for_oq_pkts(struct octeon_device *oct)
         }
         if (pkt_cnt > 0) {
             pending_pkts += pkt_cnt;
-            tasklet_schedule(&oct_priv->droq_tasklet);
+            queue_work(system_bh_wq, &oct_priv->droq_bh_work);
         }
         pkt_cnt = 0;
         schedule_timeout_uninterruptible(1);
@@ -584,7 +584,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
         break;
     }

-    tasklet_kill(&oct_priv->droq_tasklet);
+    cancel_work_sync(&oct_priv->droq_bh_work);
 }

 /**
@@ -687,7 +687,7 @@ static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx)
     list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
         netif_napi_del(napi);

-    tasklet_enable(&oct_priv->droq_tasklet);
+    enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);

     if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
         unregister_netdev(netdev);
@@ -911,7 +911,7 @@ static int liquidio_open(struct net_device *netdev)
     int ret = 0;

     if (!oct->props[lio->ifidx].napi_enabled) {
-        tasklet_disable(&oct_priv->droq_tasklet);
+        disable_work_sync(&oct_priv->droq_bh_work);

         list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
             napi_enable(napi);
@@ -986,7 +986,7 @@ static int liquidio_stop(struct net_device *netdev)

         oct->droq[0]->ops.poll_mode = 0;

-        tasklet_enable(&oct_priv->droq_tasklet);
+        enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
     }

     cancel_delayed_work_sync(&lio->stats_wk.work);
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
index eef12fdd246d..4e5f8bbc891b 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
@@ -96,7 +96,7 @@ u32 octeon_droq_check_hw_for_pkts(struct octeon_droq *droq)
     last_count = pkt_count - droq->pkt_count;
     droq->pkt_count = pkt_count;

-    /* we shall write to cnts at napi irq enable or end of droq tasklet */
+    /* we shall write to cnts at napi irq enable or end of droq bh_work */
     if (last_count)
         atomic_add(last_count, &droq->pkts_pending);

@@ -764,7 +764,7 @@ octeon_droq_process_packets(struct octeon_device *oct,
                      (u16)rdisp->rinfo->recv_pkt->rh.r.subcode));
     }

-    /* If there are packets pending. schedule tasklet again */
+    /* If there are packets pending. schedule bh_work again */
     if (atomic_read(&droq->pkts_pending))
         return 1;

diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_main.h b/drivers/net/ethernet/cavium/liquidio/octeon_main.h
index 5b4cb725f60f..a8f2a0a7b08e 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_main.h
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_main.h
@@ -24,6 +24,7 @@
 #define _OCTEON_MAIN_H_

 #include
+#include

 #if BITS_PER_LONG == 32
 #define CVM_CAST64(v) ((long long)(v))
@@ -36,8 +37,7 @@
 #define DRV_NAME "LiquidIO"

 struct octeon_device_priv {
-    /** Tasklet structures for this device. */
-    struct tasklet_struct droq_tasklet;
+    struct work_struct droq_bh_work;
     unsigned long napi_mask;
     struct octeon_device *dev;
 };


From patchwork Fri Jun 21 05:05:16 2024
X-Patchwork-Id: 13706825
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VSOLAIhMr7ucmgYUhEqNv7YbAkTdXAs1lqXyxcdm0Hs=; b=G6UGT8jPb4OYsIZjmFKnP6d34H8PQLHUdvSCjtlDX7wiHATVOabWazU7LPyg1luABD wLgMhjpOCVUoCR76RFnlTulbTSrdA7WT/5douudU/pkF5iJbMRCaIAsB8Zprwq9Lq0Bk /xbqn3V6XPkLNyBOVNxKoMAT9ztZgyhUJCL709iLoW/fl4lKMHtUzdNh7nA2tyvQJNN8 cqRMDVwoAcTNm8QKuaxzeqLucy3Ih9Lb4mmd1/pPHEuxzIhD1mVJLc/cCx6KNMcF/pmL haudWda8b8N/PtbCXfPqKESIT+L1Bydfcg2uZP8KBBNcLIvwaTANGZHGIM1Vh6ij2KBj EqcQ== X-Forwarded-Encrypted: i=1; AJvYcCU9GPFfnnxXQ6X0a/ckalLD7KKZM1S18EcbgfUEl35URn16xT+RjTBn5lwUstQWDH38UHUvqcvjIc9LIBaKN4cjS8qbW0fR93ZQedmO27LPpo6byG9vtQj5E+D69Gqhziwfm2IomsfdfMwqE+FnsvwfXC06u5TOhEsrJG983elzaQ== X-Gm-Message-State: AOJu0YxzePXBrAMfEjGeILf86wa2i33Q5rpwBsfZ6fGQcXlGys6bO1jy 1FxNP0R8RoPnUHTLzNXcDem8+UWjceB5IwK2Hnn32OxJY4JHSvzm X-Google-Smtp-Source: AGHT+IEyuwYNpw8Gzr/dG2J+s7Nt7f5YGBzB8AF6/hrpl+K05+/Kt7O4aTIQGkGtG1XbQE+PkcSe7w== X-Received: by 2002:a17:90a:3908:b0:2c7:2fdf:57b7 with SMTP id 98e67ed59e1d1-2c7b5dcd506mr6764966a91.46.1718946350554; Thu, 20 Jun 2024 22:05:50 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:50 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 06/15] net: octeon: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:16 -0700 Message-Id: <20240621050525.3720069-7-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the cavium/octeon driver. This transition ensures compatibility with the latest design and enhances performance. 
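For reference, the core of the mapping is sketched below using placeholder foo_* names (they are not symbols from this driver); it only illustrates how tasklet_setup()/tasklet_schedule()/tasklet_kill() and from_tasklet() translate to INIT_WORK(), queue_work() on system_bh_wq, cancel_work_sync() and from_work(), which is the pattern the diff that follows applies.

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Hypothetical private struct standing in for the driver's own. */
struct foo_priv {
        struct net_device *netdev;
        struct work_struct tx_clean_bh_work;    /* was: struct tasklet_struct */
};

/* Handler now takes a work_struct; from_work() replaces from_tasklet(). */
static void foo_tx_clean_bh_work(struct work_struct *work)
{
        struct foo_priv *p = from_work(p, work, tx_clean_bh_work);

        /* ...reclaim completed TX descriptors exactly as the tasklet did... */
        netif_wake_queue(p->netdev);
}

static irqreturn_t foo_intr(int irq, void *dev_id)
{
        struct foo_priv *p = dev_id;

        queue_work(system_bh_wq, &p->tx_clean_bh_work); /* was: tasklet_schedule() */
        return IRQ_HANDLED;
}

static void foo_setup(struct foo_priv *p)
{
        INIT_WORK(&p->tx_clean_bh_work, foo_tx_clean_bh_work); /* was: tasklet_setup() */
}

static void foo_teardown(struct foo_priv *p)
{
        cancel_work_sync(&p->tx_clean_bh_work); /* was: tasklet_kill() */
}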
Signed-off-by: Allen Pais --- drivers/net/ethernet/cavium/octeon/octeon_mgmt.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c index 744f2434f7fa..0db993c1cc36 100644 --- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -144,7 +145,7 @@ struct octeon_mgmt { unsigned int last_speed; struct device *dev; struct napi_struct napi; - struct tasklet_struct tx_clean_tasklet; + struct work_struct tx_clean_bh_work; struct device_node *phy_np; resource_size_t mix_phys; resource_size_t mix_size; @@ -315,9 +316,9 @@ static void octeon_mgmt_clean_tx_buffers(struct octeon_mgmt *p) netif_wake_queue(p->netdev); } -static void octeon_mgmt_clean_tx_tasklet(struct tasklet_struct *t) +static void octeon_mgmt_clean_tx_bh_work(struct work_struct *work) { - struct octeon_mgmt *p = from_tasklet(p, t, tx_clean_tasklet); + struct octeon_mgmt *p = from_work(p, work, tx_clean_bh_work); octeon_mgmt_clean_tx_buffers(p); octeon_mgmt_enable_tx_irq(p); } @@ -684,7 +685,7 @@ static irqreturn_t octeon_mgmt_interrupt(int cpl, void *dev_id) } if (mixx_isr.s.orthresh) { octeon_mgmt_disable_tx_irq(p); - tasklet_schedule(&p->tx_clean_tasklet); + queue_work(system_bh_wq, &p->tx_clean_bh_work); } return IRQ_HANDLED; @@ -1487,8 +1488,8 @@ static int octeon_mgmt_probe(struct platform_device *pdev) skb_queue_head_init(&p->tx_list); skb_queue_head_init(&p->rx_list); - tasklet_setup(&p->tx_clean_tasklet, - octeon_mgmt_clean_tx_tasklet); + INIT_WORK(&p->tx_clean_bh_work, + octeon_mgmt_clean_tx_bh_work); netdev->priv_flags |= IFF_UNICAST_FLT; From patchwork Fri Jun 21 05:05:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706826 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oo1-f41.google.com (mail-oo1-f41.google.com [209.85.161.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7AC1C16C698; Fri, 21 Jun 2024 05:05:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.41 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946355; cv=none; b=Fs5tnSHXhekazZUQAagbgrNOdiMVhcKBGL7rAwGfoG9YqXqwKFHzJ7hdJcMmPE2zJagzawCGswc3BXPf87WSJTUQEshbNu1KXvnABGGRkeOzlSZt/YaxCGbd2gVdJo2YEmr7jI7c07PVPG/i7F9YJ4CuTgrAj2Htx+tJBEMCHXA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946355; c=relaxed/simple; bh=RHIQ1ZK7HhrkITryP32SV1M7i6oMh9uMx3UMrh72w6I=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=FzIOHZUb5dQjIukmgllO8aMLXdmHXSHBieXkZfvWyISJ/Z73VIkTvS1BTJgEU916SLiQCslJkrrOxMt+4PRbukMd5PKDTnYkdp+yH5C9zSCHARD/WVnyqV/aV0KeegcVFZ3c1loWBeZdbMtSwlev6IsiYfc/iB+83blzO5k8QqE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=GjvcKiKA; arc=none smtp.client-ip=209.85.161.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="GjvcKiKA" Received: by mail-oo1-f41.google.com with SMTP id 006d021491bc7-5b9778bb7c8so777249eaf.3; Thu, 20 Jun 2024 22:05:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946352; x=1719551152; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=d7B644WJji2EeDQXIp+qb+T3U50yFIygDXuUoanjxuY=; b=GjvcKiKAg3/8luYqiGZ75Ou0okowRiK8gixypPysvrHk5u+Eaxdq0O+/CRHjzOi94K K/MqpKXL37Hn9zPWDOJHCVoQ2B2oINmx8K5SSoAnf3DOJGxGIDCWNwe/rNcN1fskHROp M/9TOZHwam3bVCN/fCji7fzZyBFLs6bH6LysZdNbqW95f2Q2+fhjH2wK1akdqo9DhGie zcafdZTr/EB9VZYtsfFqxPWOltJ3Okhp1Zr8WbMlSNMiLiPtPbuO1ZyIJn5+bZK7Rh+2 5XZM5U/Rsf5aGFRp1sFfBzEY/mrC9iDHGuSC5gU2Rwp47sVcaH2ThK9P2m9TsGapyFID ncRA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946352; x=1719551152; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=d7B644WJji2EeDQXIp+qb+T3U50yFIygDXuUoanjxuY=; b=qd7BY42fYmFkQDAFInbIdwC9quJq63Yu9MqcXDgFNATGjaG0wzytgfQjKnJxGfuz0C haNzzCcOwPNVi8PzmnZyiiWjRC1GMN/iKmViCcCh6DjyowWr+k4sdkDOVRoid33P4J+s +OeJU/66p98uxoB7m4NvezgVMR4kT21+p88lCYkSzvuacwSOKYs63PhdvO0YveCoa30o Tt1QJibVoMO2wPME6VuD7bVdZSbPyAoU/hAfxgfBlv+9rAKoAUo3k6nwvGETLg+mVcd6 hKYjeBU4LQxx4KUXZiy3GqfcZx6MRCDnwq8mfpbnx6iGbhcyovu06LYpDRX3vFvWSLqp zXSg== X-Forwarded-Encrypted: i=1; AJvYcCXr0Z36eG+ofgvY0HcWOeZ/U0yj7xxSScSfropGki+W0Ef8FRfdvuGm46Vs0O6AEWlb3eJEy8nRMrNAgn9C9MjjuaONqnqFt+29OXgkm+DqOAB3xSqUb3fl9XeFvTEnr/aJFjqy+KsR2ueuCJloN7z97ZMseiHHV1PdXIfQzBup1A== X-Gm-Message-State: AOJu0YxnplXruFPONLW3NVXG8HE0vGSr9Bby3c7jR0gS3tZL27sYzvtt j8mXCFsHrizrGe4jLLWtn5LUI+ucrRW5zRfgBMZu19QTUSd6KpDh X-Google-Smtp-Source: AGHT+IGZar/opqQUcuItaXuxxIwbAvWDnKX9EGrfatjDViIRrCPB4a9s+Z6quwYramSKiqkj+FmKtQ== X-Received: by 2002:a05:6358:5e08:b0:19f:4a60:e6fc with SMTP id e5c5f4694b2df-1a1fd57ea2amr962858555d.25.1718946352503; Thu, 20 Jun 2024 22:05:52 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:52 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Sunil Goutham , "David S. 
Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , linux-arm-kernel@lists.infradead.org, netdev@vger.kernel.org Subject: [PATCH 07/15] net: thunderx: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:17 -0700 Message-Id: <20240621050525.3720069-8-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the cavium/thunderx driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais Reviewed-by: Sunil Goutham --- drivers/net/ethernet/cavium/thunder/nic.h | 5 ++-- .../net/ethernet/cavium/thunder/nicvf_main.c | 24 +++++++++---------- .../ethernet/cavium/thunder/nicvf_queues.c | 4 ++-- .../ethernet/cavium/thunder/nicvf_queues.h | 2 +- 4 files changed, 18 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/cavium/thunder/nic.h b/drivers/net/ethernet/cavium/thunder/nic.h index 090d6b83982a..ecc175b6e7fa 100644 --- a/drivers/net/ethernet/cavium/thunder/nic.h +++ b/drivers/net/ethernet/cavium/thunder/nic.h @@ -8,6 +8,7 @@ #include #include +#include #include #include "thunder_bgx.h" @@ -295,7 +296,7 @@ struct nicvf { bool rb_work_scheduled; struct page *rb_page; struct delayed_work rbdr_work; - struct tasklet_struct rbdr_task; + struct work_struct rbdr_bh_work; /* Secondary Qset */ u8 sqs_count; @@ -319,7 +320,7 @@ struct nicvf { bool loopback_supported; struct nicvf_rss_info rss_info; struct nicvf_pfc pfc; - struct tasklet_struct qs_err_task; + struct work_struct qs_err_bh_work; struct work_struct reset_task; struct nicvf_work rx_mode_work; /* spinlock to protect workqueue arguments from concurrent access */ diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index aebb9fef3f6e..b0878bd25cf0 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -982,9 +982,9 @@ static int nicvf_poll(struct napi_struct *napi, int budget) * * As of now only CQ errors are handled */ -static void nicvf_handle_qs_err(struct tasklet_struct *t) +static void nicvf_handle_qs_err(struct work_struct *work) { - struct nicvf *nic = from_tasklet(nic, t, qs_err_task); + struct nicvf *nic = from_work(nic, work, qs_err_bh_work); struct queue_set *qs = nic->qs; int qidx; u64 status; @@ -1069,7 +1069,7 @@ static irqreturn_t nicvf_rbdr_intr_handler(int 
irq, void *nicvf_irq) if (!nicvf_is_intr_enabled(nic, NICVF_INTR_RBDR, qidx)) continue; nicvf_disable_intr(nic, NICVF_INTR_RBDR, qidx); - tasklet_hi_schedule(&nic->rbdr_task); + queue_work(system_bh_highpri_wq, &nic->rbdr_bh_work); /* Clear interrupt */ nicvf_clear_intr(nic, NICVF_INTR_RBDR, qidx); } @@ -1085,7 +1085,7 @@ static irqreturn_t nicvf_qs_err_intr_handler(int irq, void *nicvf_irq) /* Disable Qset err interrupt and schedule softirq */ nicvf_disable_intr(nic, NICVF_INTR_QS_ERR, 0); - tasklet_hi_schedule(&nic->qs_err_task); + queue_work(system_bh_highpri_wq, &nic->qs_err_bh_work); nicvf_clear_intr(nic, NICVF_INTR_QS_ERR, 0); return IRQ_HANDLED; @@ -1364,8 +1364,8 @@ int nicvf_stop(struct net_device *netdev) for (irq = 0; irq < nic->num_vec; irq++) synchronize_irq(pci_irq_vector(nic->pdev, irq)); - tasklet_kill(&nic->rbdr_task); - tasklet_kill(&nic->qs_err_task); + cancel_work_sync(&nic->rbdr_bh_work); + cancel_work_sync(&nic->qs_err_bh_work); if (nic->rb_work_scheduled) cancel_delayed_work_sync(&nic->rbdr_work); @@ -1488,11 +1488,11 @@ int nicvf_open(struct net_device *netdev) nicvf_hw_set_mac_addr(nic, netdev); } - /* Init tasklet for handling Qset err interrupt */ - tasklet_setup(&nic->qs_err_task, nicvf_handle_qs_err); + /* Init bh_work for handling Qset err interrupt */ + INIT_WORK(&nic->qs_err_bh_work, nicvf_handle_qs_err); - /* Init RBDR tasklet which will refill RBDR */ - tasklet_setup(&nic->rbdr_task, nicvf_rbdr_task); + /* Init RBDR bh_work which will refill RBDR */ + INIT_WORK(&nic->rbdr_bh_work, nicvf_rbdr_bh_work); INIT_DELAYED_WORK(&nic->rbdr_work, nicvf_rbdr_work); /* Configure CPI alorithm */ @@ -1561,8 +1561,8 @@ int nicvf_open(struct net_device *netdev) cleanup: nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0); nicvf_unregister_interrupts(nic); - tasklet_kill(&nic->qs_err_task); - tasklet_kill(&nic->rbdr_task); + cancel_work_sync(&nic->qs_err_bh_work); + cancel_work_sync(&nic->rbdr_bh_work); napi_del: for (qidx = 0; qidx < qs->cq_cnt; qidx++) { cq_poll = nic->napi[qidx]; diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c index 06397cc8bb36..ad71160879e4 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c @@ -461,9 +461,9 @@ void nicvf_rbdr_work(struct work_struct *work) } /* In Softirq context, alloc rcv buffers in atomic mode */ -void nicvf_rbdr_task(struct tasklet_struct *t) +void nicvf_rbdr_bh_work(struct work_struct *work) { - struct nicvf *nic = from_tasklet(nic, t, rbdr_task); + struct nicvf *nic = from_work(nic, work, rbdr_bh_work); nicvf_refill_rbdr(nic, GFP_ATOMIC); if (nic->rb_alloc_fail) { diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h index 8453defc296c..c6f18fb7c50e 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h +++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h @@ -348,7 +348,7 @@ void nicvf_xdp_sq_doorbell(struct nicvf *nic, struct snd_queue *sq, int sq_num); struct sk_buff *nicvf_get_rcv_skb(struct nicvf *nic, struct cqe_rx_t *cqe_rx, bool xdp); -void nicvf_rbdr_task(struct tasklet_struct *t); +void nicvf_rbdr_bh_work(struct work_struct *work); void nicvf_rbdr_work(struct work_struct *work); void nicvf_enable_intr(struct nicvf *nic, int int_type, int q_idx); From patchwork Fri Jun 21 05:05:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen 
X-Patchwork-Id: 13706827 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pf1-f174.google.com (mail-pf1-f174.google.com [209.85.210.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3B28016C874; Fri, 21 Jun 2024 05:05:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946358; cv=none; b=K/JXaJ5JlzvUtMga3JUilrV1rdYv7tRrYXazOeb9ErfIV/KxNGOPiL5Ob/T0i7uY7xH4MYBMuA0cB1uTS2QVrT9UDbK7aLZLwMOav7izJg/eGAkm3tEtANwpHIiC0ASMgRUdhTrFWWubt9hA3q5QkJJoE+z/jtAXlXU4j5n9Iew= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946358; c=relaxed/simple; bh=WtNE/dmbZmq1ThtIDUkgFnTonDnp8MMPvTwAHTwhhpo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=DijC1oWJ7i+2YJfJaS6c8madcKRRiBsq2Bcqxh9lWfg3uDrpQtsqxHQdPvGaJ9fMLCNKsKRhufWRjIl+3B93kuVnSL1tpWL5siUXmNVpUP77Qg0JhuDR8KtJoB2IAga9Hl7DNymSfuTbdCnlzvsXo095V50b8PS4LmMhSMrhYA8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=kHS1TQfc; arc=none smtp.client-ip=209.85.210.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="kHS1TQfc" Received: by mail-pf1-f174.google.com with SMTP id d2e1a72fcca58-70436ac8882so1360830b3a.2; Thu, 20 Jun 2024 22:05:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946355; x=1719551155; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mHsWDKW4hHN8DyEjz7y8O8xUw7bLEPYBA03SJ52fByU=; b=kHS1TQfc+6HOvvbur4fzi5VLf+AlksHZJxXCaoW1kNySTGmVIrhdkjVgu5tv2Wmxc9 IH+gtgyuQnlPCDVC0NXZnalEU4z/khxVK1d55/DH6C09ZUTU6p3GdGZ6f3OaxCbC0p58 RbY/iuyx28xOBC7b8SAEqnVb2Z8ScJRrJZesl2kkoeNcUIqkSqasVfSWSLRd8FO2OHZQ nctPmgYkhqDkTxqCXbJhlhmEks8LHKVlKmtiYhOh4slGS+zXK+g1zsRrVBXlyvqPGpMl oI+G9/d9Iur+D/gFVK4PijIPEaRv0xy4REvnoeooLag66MNK4D4b+MXUMEUrqsjxI2mu mksg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946355; x=1719551155; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mHsWDKW4hHN8DyEjz7y8O8xUw7bLEPYBA03SJ52fByU=; b=Puk0OVvab+Z345uyeQjUd+EB9szvNpUfayKFw5frtTNBhD6pJng+Ktin65H4GlO/C+ +JtZnniM9pJ0u3t0bJNoUVUhT+8mt8bTvz3GFhyTVNLiMC9FBZz80FItJeZF+reMOVJB 5BeTNtxxdcAld1Y5wdTlRqq3Z0lvYT1zsteQJvB3LDdT0rjpVu9hAquxNci9ThNon9ik MsgT9G44sBpJaJZj1Q1YevADm8bWnDDhvYDj0PvShSi9I7ZyJKETx7F/0Fy+8VlAnsly BUdf9OPUe+3afbI9StqsWa6C09bECopSFgv7FoKsIaAei8DdUN8SzEIG9uFKEBPHwByF lFhQ== X-Forwarded-Encrypted: i=1; AJvYcCWJbybp3MPYBe7VXLfFQJyMzj5meZIoLe02/SzsY3dTh3OLmF2DU2h6Gi/TalvAqiuCmF/virjmME+Lpd6rTCMA6EHWbcrJkctMUClAZf7xHmczBn39o3xxbfIXw3p6YyABqQXmucyC9yHOq1qTan6IxadrjSw822yV4bhugXqnwA== X-Gm-Message-State: AOJu0Yz+zfZNB3zQQcGguVdPAFXd6DGzThawwR8IiQsJxoC4cCG4pJaB E0jSeSG20J3a0aSbXjx3uH7V1MRNqARDAJjDBAmDswUV/I7t1OGYgROo+g== 
X-Google-Smtp-Source: AGHT+IG6t7y49EHxYkzZHGmiLm+D+QkRmHOq5egoG41f5V6JlPxeks0LH0AnZ7ofCjR0RAh72APb8g== X-Received: by 2002:a05:6a00:69af:b0:704:2bfb:a7fe with SMTP id d2e1a72fcca58-70629d00bf1mr6907201b3a.33.1718946355198; Thu, 20 Jun 2024 22:05:55 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:54 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni , Potnuri Bharat Teja Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 08/15] net: chelsio: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:18 -0700 Message-Id: <20240621050525.3720069-9-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the chelsio driver. This transition ensures compatibility with the latest design and enhances performance. 
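The same mapping covers the queue-restart tasklets in this driver. BH work items are executed in bottom-half context, so the plain spin_lock() calls in the restart handlers stay as they are, and tasklet_hi_schedule() maps to queue_work() on system_bh_highpri_wq. A minimal sketch with hypothetical foo_* names (not the driver's real symbols) of a backpressured queue being resumed from BH work:

#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for a stalled TX queue with an skb backlog. */
struct foo_txq {
        struct sk_buff_head sendq;              /* backpressured packets */
        u8 full;
        struct work_struct qresume_bh_work;     /* was: qresume_tsk */
};

static void foo_restart_txq(struct work_struct *work)
{
        struct foo_txq *q = from_work(q, work, qresume_bh_work);
        struct sk_buff *skb;

        /* Runs in BH context, so the plain spin_lock() is still correct. */
        spin_lock(&q->sendq.lock);
        q->full = 0;
        while ((skb = __skb_dequeue(&q->sendq)) != NULL)
                kfree_skb(skb);         /* placeholder for the real xmit path */
        spin_unlock(&q->sendq.lock);
}

static void foo_txq_init(struct foo_txq *q)
{
        skb_queue_head_init(&q->sendq);
        INIT_WORK(&q->qresume_bh_work, foo_restart_txq);  /* was: tasklet_setup() */
}

/* Completion/timer paths: was tasklet_schedule(&q->qresume_tsk). */
static void foo_txq_kick(struct foo_txq *q)
{
        queue_work(system_bh_wq, &q->qresume_bh_work);
}

static void foo_txq_free(struct foo_txq *q)
{
        cancel_work_sync(&q->qresume_bh_work);  /* was: tasklet_kill() */
        __skb_queue_purge(&q->sendq);
}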
Signed-off-by: Allen Pais --- drivers/net/ethernet/chelsio/cxgb/sge.c | 19 ++++----- drivers/net/ethernet/chelsio/cxgb4/cxgb4.h | 9 +++-- .../net/ethernet/chelsio/cxgb4/cxgb4_main.c | 2 +- .../ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c | 4 +- .../net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 2 +- drivers/net/ethernet/chelsio/cxgb4/sge.c | 40 +++++++++---------- drivers/net/ethernet/chelsio/cxgb4vf/sge.c | 6 +-- 7 files changed, 42 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/chelsio/cxgb/sge.c b/drivers/net/ethernet/chelsio/cxgb/sge.c index 861edff5ed89..4dab9b0dca86 100644 --- a/drivers/net/ethernet/chelsio/cxgb/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb/sge.c @@ -229,11 +229,11 @@ struct sched { unsigned int port; /* port index (round robin ports) */ unsigned int num; /* num skbs in per port queues */ struct sched_port p[MAX_NPORTS]; - struct tasklet_struct sched_tsk;/* tasklet used to run scheduler */ + struct work_struct sched_bh_work;/* bh_work used to run scheduler */ struct sge *sge; }; -static void restart_sched(struct tasklet_struct *t); +static void restart_sched(struct work_struct *work); /* @@ -270,14 +270,14 @@ static const u8 ch_mac_addr[ETH_ALEN] = { }; /* - * stop tasklet and free all pending skb's + * stop bh_work and free all pending skb's */ static void tx_sched_stop(struct sge *sge) { struct sched *s = sge->tx_sched; int i; - tasklet_kill(&s->sched_tsk); + cancel_work_sync(&s->sched_bh_work); for (i = 0; i < MAX_NPORTS; i++) __skb_queue_purge(&s->p[s->port].skbq); @@ -371,7 +371,7 @@ static int tx_sched_init(struct sge *sge) return -ENOMEM; pr_debug("tx_sched_init\n"); - tasklet_setup(&s->sched_tsk, restart_sched); + INIT_WORK(&s->sched_bh_work, restart_sched); s->sge = sge; sge->tx_sched = s; @@ -1300,12 +1300,12 @@ static inline void reclaim_completed_tx(struct sge *sge, struct cmdQ *q) } /* - * Called from tasklet. Checks the scheduler for any + * Called from bh context. Checks the scheduler for any * pending skbs that can be sent. 
*/ -static void restart_sched(struct tasklet_struct *t) +static void restart_sched(struct work_struct *work) { - struct sched *s = from_tasklet(s, t, sched_tsk); + struct sched *s = from_work(s, work, sched_bh_work); struct sge *sge = s->sge; struct adapter *adapter = sge->adapter; struct cmdQ *q = &sge->cmdQ[0]; @@ -1451,7 +1451,8 @@ static unsigned int update_tx_info(struct adapter *adapter, writel(F_CMDQ0_ENABLE, adapter->regs + A_SG_DOORBELL); } if (sge->tx_sched) - tasklet_hi_schedule(&sge->tx_sched->sched_tsk); + queue_work(system_bh_highpri_wq, + &sge->tx_sched->sched_bh_work); flags &= ~F_CMDQ0_ENABLE; } diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h index fca9533bc011..846040f5e638 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h @@ -53,6 +53,7 @@ #include #include #include +#include #include #include #include "t4_chip_type.h" @@ -880,7 +881,7 @@ struct sge_uld_txq { /* state for an SGE offload Tx queue */ struct sge_txq q; struct adapter *adap; struct sk_buff_head sendq; /* list of backpressured packets */ - struct tasklet_struct qresume_tsk; /* restarts the queue */ + struct work_struct qresume_bh_work; /* restarts the queue */ bool service_ofldq_running; /* service_ofldq() is processing sendq */ u8 full; /* the Tx ring is full */ unsigned long mapping_err; /* # of I/O MMU packet mapping errors */ @@ -890,7 +891,7 @@ struct sge_ctrl_txq { /* state for an SGE control Tx queue */ struct sge_txq q; struct adapter *adap; struct sk_buff_head sendq; /* list of backpressured packets */ - struct tasklet_struct qresume_tsk; /* restarts the queue */ + struct work_struct qresume_bh_work; /* restarts the queue */ u8 full; /* the Tx ring is full */ } ____cacheline_aligned_in_smp; @@ -946,7 +947,7 @@ struct sge_eosw_txq { u32 hwqid; /* Underlying hardware queue index */ struct net_device *netdev; /* Pointer to netdevice */ - struct tasklet_struct qresume_tsk; /* Restarts the queue */ + struct work_struct qresume_bh_work; /* Restarts the queue */ struct completion completion; /* completion for FLOWC rendezvous */ }; @@ -2107,7 +2108,7 @@ void free_tx_desc(struct adapter *adap, struct sge_txq *q, void cxgb4_eosw_txq_free_desc(struct adapter *adap, struct sge_eosw_txq *txq, u32 ndesc); int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc); -void cxgb4_ethofld_restart(struct tasklet_struct *t); +void cxgb4_ethofld_restart(struct work_struct *work); int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, const struct pkt_gl *si); void free_txq(struct adapter *adap, struct sge_txq *q); diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index 2418645c8823..179517e90da7 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -589,7 +589,7 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp, struct sge_uld_txq *oq; oq = container_of(txq, struct sge_uld_txq, q); - tasklet_schedule(&oq->qresume_tsk); + queue_work(system_bh_wq, &oq->qresume_bh_work); } } else if (opcode == CPL_FW6_MSG || opcode == CPL_FW4_MSG) { const struct cpl_fw6_msg *p = (void *)rsp; diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c index 338b04f339b3..c165d3393e6e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c @@ 
-114,7 +114,7 @@ static int cxgb4_init_eosw_txq(struct net_device *dev, eosw_txq->cred = adap->params.ofldq_wr_cred; eosw_txq->hwqid = hwqid; eosw_txq->netdev = dev; - tasklet_setup(&eosw_txq->qresume_tsk, cxgb4_ethofld_restart); + INIT_WORK(&eosw_txq->qresume_bh_work, cxgb4_ethofld_restart); return 0; } @@ -143,7 +143,7 @@ static void cxgb4_free_eosw_txq(struct net_device *dev, cxgb4_clean_eosw_txq(dev, eosw_txq); kfree(eosw_txq->desc); spin_unlock_bh(&eosw_txq->lock); - tasklet_kill(&eosw_txq->qresume_tsk); + cancel_work_sync(&eosw_txq->qresume_bh_work); } static int cxgb4_mqprio_alloc_hw_resources(struct net_device *dev) diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c index 5c13bcb4550d..d9bdf0b1eb69 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c @@ -407,7 +407,7 @@ free_sge_txq_uld(struct adapter *adap, struct sge_uld_txq_info *txq_info) struct sge_uld_txq *txq = &txq_info->uldtxq[i]; if (txq->q.desc) { - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); t4_ofld_eq_free(adap, adap->mbox, adap->pf, 0, txq->q.cntxt_id); free_tx_desc(adap, &txq->q, txq->q.in_use, false); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index de52bcb884c4..d054979ef850 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -2769,15 +2769,15 @@ static int ctrl_xmit(struct sge_ctrl_txq *q, struct sk_buff *skb) /** * restart_ctrlq - restart a suspended control queue - * @t: pointer to the tasklet associated with this handler + * @work: pointer to the work struct associated with this handler * * Resumes transmission on a suspended Tx control queue. */ -static void restart_ctrlq(struct tasklet_struct *t) +static void restart_ctrlq(struct work_struct *work) { struct sk_buff *skb; unsigned int written = 0; - struct sge_ctrl_txq *q = from_tasklet(q, t, qresume_tsk); + struct sge_ctrl_txq *q = from_work(q, work, qresume_bh_work); spin_lock(&q->sendq.lock); reclaim_completed_tx_imm(&q->q); @@ -3075,13 +3075,13 @@ static int ofld_xmit(struct sge_uld_txq *q, struct sk_buff *skb) /** * restart_ofldq - restart a suspended offload queue - * @t: pointer to the tasklet associated with this handler + * @work: pointer to the work struct associated with this handler * * Resumes transmission on a suspended Tx offload queue. */ -static void restart_ofldq(struct tasklet_struct *t) +static void restart_ofldq(struct work_struct *work) { - struct sge_uld_txq *q = from_tasklet(q, t, qresume_tsk); + struct sge_uld_txq *q = from_work(q, work, qresume_bh_work); spin_lock(&q->sendq.lock); q->full = 0; /* the queue actually is completely empty now */ @@ -4020,10 +4020,10 @@ static int napi_rx_handler(struct napi_struct *napi, int budget) return work_done; } -void cxgb4_ethofld_restart(struct tasklet_struct *t) +void cxgb4_ethofld_restart(struct work_struct *work) { - struct sge_eosw_txq *eosw_txq = from_tasklet(eosw_txq, t, - qresume_tsk); + struct sge_eosw_txq *eosw_txq = from_work(eosw_txq, work, + qresume_bh_work); int pktcount; spin_lock(&eosw_txq->lock); @@ -4050,7 +4050,7 @@ void cxgb4_ethofld_restart(struct tasklet_struct *t) * @si: the gather list of packet fragments * * Process a ETHOFLD Tx completion. Increment the cidx here, but - * free up the descriptors in a tasklet later. + * free up the descriptors later in bh_work. 
*/ int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, const struct pkt_gl *si) @@ -4117,10 +4117,10 @@ int cxgb4_ethofld_rx_handler(struct sge_rspq *q, const __be64 *rsp, spin_unlock(&eosw_txq->lock); - /* Schedule a tasklet to reclaim SKBs and restart ETHOFLD Tx, + /* Schedule a bh work to reclaim SKBs and restart ETHOFLD Tx, * if there were packets waiting for completion. */ - tasklet_schedule(&eosw_txq->qresume_tsk); + queue_work(system_bh_wq, &eosw_txq->qresume_bh_work); } out_done: @@ -4279,7 +4279,7 @@ static void sge_tx_timer_cb(struct timer_list *t) struct sge_uld_txq *txq = s->egr_map[id]; clear_bit(id, s->txq_maperr); - tasklet_schedule(&txq->qresume_tsk); + queue_work(system_bh_wq, &txq->qresume_bh_work); } if (!is_t4(adap->params.chip)) { @@ -4719,7 +4719,7 @@ int t4_sge_alloc_ctrl_txq(struct adapter *adap, struct sge_ctrl_txq *txq, init_txq(adap, &txq->q, FW_EQ_CTRL_CMD_EQID_G(ntohl(c.cmpliqid_eqid))); txq->adap = adap; skb_queue_head_init(&txq->sendq); - tasklet_setup(&txq->qresume_tsk, restart_ctrlq); + INIT_WORK(&txq->qresume_bh_work, restart_ctrlq); txq->full = 0; return 0; } @@ -4809,7 +4809,7 @@ int t4_sge_alloc_uld_txq(struct adapter *adap, struct sge_uld_txq *txq, txq->q.q_type = CXGB4_TXQ_ULD; txq->adap = adap; skb_queue_head_init(&txq->sendq); - tasklet_setup(&txq->qresume_tsk, restart_ofldq); + INIT_WORK(&txq->qresume_bh_work, restart_ofldq); txq->full = 0; txq->mapping_err = 0; return 0; @@ -4952,7 +4952,7 @@ void t4_free_sge_resources(struct adapter *adap) struct sge_ctrl_txq *cq = &adap->sge.ctrlq[i]; if (cq->q.desc) { - tasklet_kill(&cq->qresume_tsk); + cancel_work_sync(&cq->qresume_bh_work); t4_ctrl_eq_free(adap, adap->mbox, adap->pf, 0, cq->q.cntxt_id); __skb_queue_purge(&cq->sendq); @@ -5002,7 +5002,7 @@ void t4_sge_start(struct adapter *adap) * t4_sge_stop - disable SGE operation * @adap: the adapter * - * Stop tasklets and timers associated with the DMA engine. Note that + * Stop bh works and timers associated with the DMA engine. Note that * this is effective only if measures have been taken to disable any HW * events that may restart them. */ @@ -5025,7 +5025,7 @@ void t4_sge_stop(struct adapter *adap) for_each_ofldtxq(&adap->sge, i) { if (txq->q.desc) - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); } } } @@ -5039,7 +5039,7 @@ void t4_sge_stop(struct adapter *adap) for_each_ofldtxq(&adap->sge, i) { if (txq->q.desc) - tasklet_kill(&txq->qresume_tsk); + cancel_work_sync(&txq->qresume_bh_work); } } } @@ -5048,7 +5048,7 @@ void t4_sge_stop(struct adapter *adap) struct sge_ctrl_txq *cq = &s->ctrlq[i]; if (cq->q.desc) - tasklet_kill(&cq->qresume_tsk); + cancel_work_sync(&cq->qresume_bh_work); } } diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c index 5b1d746e6563..1f4628178d28 100644 --- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c @@ -2587,7 +2587,7 @@ void t4vf_free_sge_resources(struct adapter *adapter) * t4vf_sge_start - enable SGE operation * @adapter: the adapter * - * Start tasklets and timers associated with the DMA engine. + * Start bh work and timers associated with the DMA engine. */ void t4vf_sge_start(struct adapter *adapter) { @@ -2600,7 +2600,7 @@ void t4vf_sge_start(struct adapter *adapter) * t4vf_sge_stop - disable SGE operation * @adapter: the adapter * - * Stop tasklets and timers associated with the DMA engine. Note that + * Stop bh works and timers associated with the DMA engine. 
Note that * this is effective only if measures have been taken to disable any HW * events that may restart them. */ @@ -2692,7 +2692,7 @@ int t4vf_sge_init(struct adapter *adapter) s->fl_starve_thres = s->fl_starve_thres * 2 + 1; /* - * Set up tasklet timers. + * Set up bh work timers. */ timer_setup(&s->rx_timer, sge_rx_timer_cb, 0); timer_setup(&s->tx_timer, sge_tx_timer_cb, 0); From patchwork Fri Jun 21 05:05:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706828 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oo1-f53.google.com (mail-oo1-f53.google.com [209.85.161.53]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB8E212F37F; Fri, 21 Jun 2024 05:05:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.53 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946359; cv=none; b=ZVxwmCXQedBfr8vM69We1gCqGXghs1joG9uPxpbV2+CRjTm3FhGH+6vKJ3X6BeaSMoGzxpSlWfyeddJhAEX0hsV7+u7bPQUN9xc6zzxafgATDyWrHigvadKrJtuAhXiVbC+sYG5nfr1yI/2yn3km1u+HMQJqHgcwkfYZx5I9Hsg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946359; c=relaxed/simple; bh=5wCl1B1i8ji1xA4dADPfV0mjXWvLWyC2722XSPdktnI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=UP+95DzvJEWcCtM6AuEfXJQ/eBXMTMoTiym61HxYzjbK6korYPtaH8HutnQkyxwQKDv29aJsHH43UO0Pq3LLtsZE5y0wCJNX31gX3XKhaxPHllisIcw6KXu9qo5Tlohd3xQCSh7t2+pWbcMa9feUhCzaDPtCxrQLhj8ikOVfR1g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=BilxYJbF; arc=none smtp.client-ip=209.85.161.53 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="BilxYJbF" Received: by mail-oo1-f53.google.com with SMTP id 006d021491bc7-5c19d338401so810856eaf.0; Thu, 20 Jun 2024 22:05:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946357; x=1719551157; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9dhdHUZpgwd2/wkpVLwdxeMY04zoPfYMfPcgph0II3s=; b=BilxYJbF5AQglQIk0tqaiyb4LVUbM/BqGAR4htJwugCfIJrJyfBM0yeRPXyi7THQRJ 8TVm3DYgoJV1DHIj73RKtHKsqOOGOYpuJOekwyPrPdq3RhmvCfEby9qFNYD5vQLezlfF xrIQZJ/3snT77OvqvcuiQRRo+goKbbDOu4LUlYUDuDZxv5FD8Kf29t9KpO7oX5cTp3kM T9GmP1LSQjr2wYSro24JA0Qds2zaD5ei+mQaJFddItfTpEsPCeHENaKseoGmyif+nzBz tw8VRs4LXL88UvpbJ3oYbq5dzp6U4ds6kGQeFMS4E14u8A4SoF8cIuIvL3FDjxuMvMEt gZrQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946357; x=1719551157; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9dhdHUZpgwd2/wkpVLwdxeMY04zoPfYMfPcgph0II3s=; b=ZV1+SxGfy8DEe8V47goOCWH2lGVSf7qkSdFdE1pml0ZScSjscheI5mW+wSFSrup0pP iacw2qEfJTXe5cml06v37UgJfyHzvVBtWDIObPorew7/frsXXB7OQO3T/WDV1LIpm7Pv 
dvtcWhR1W6se3yHNWsffp/TlS4iwMwlyIGOTEQMPjOL+/3Ewle9KQSMyJg/9wM0MT0kU rsG6G5Hpot3z8leiulxJ2OdT14PJFSqSN8LDjcAMilW+uCmTSrlD0sDJfncnH+kcFJ8z WvgObr63bCqKtxMyQtem14+zSQVlCSvPliHC3r6dDbfAXxYhJXI8lhVdJ9zRJPK9FEm0 yo2Q== X-Forwarded-Encrypted: i=1; AJvYcCUKfHj4k02lgLCdCql3c8gtQZcXHcFNJBvMBFj6+HyAucUd1OmE47bRV+pshX3tBMwcCodmY60wrvKR3rjbp15YB0I0t3Q1cFzwlqn1YsLt5v0dRBNcf1vONWPkQ4QjZ6+8AR7DaoM4agNuQs644/XJglLdz3NKPvQY6Mg2TOox1w== X-Gm-Message-State: AOJu0YwRIp31vA/75iL5H+egC+Z9MVTM3mIK19sINSicQzR0MDj/kdNz 8Oz8F3TgodXeazg16SYDm8yVbfcDlBL1XyEhNDMAyik9KuIxU5G5 X-Google-Smtp-Source: AGHT+IHhA35uL6dKwMEPYqyMrHx0+sbso3nzM6DX+derz+sMKjvn9qW/U4VOg8Eu5V2sPuLfkd8gmQ== X-Received: by 2002:a05:6358:5328:b0:19e:e349:1cfd with SMTP id e5c5f4694b2df-1a1fd5cf280mr812647955d.26.1718946356722; Thu, 20 Jun 2024 22:05:56 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:56 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Denis Kirjanov , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 09/15] net: sundance: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:19 -0700 Message-Id: <20240621050525.3720069-10-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the dlink sundance driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- drivers/net/ethernet/dlink/sundance.c | 41 ++++++++++++++------------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/dlink/sundance.c b/drivers/net/ethernet/dlink/sundance.c index 8af5ecec7d61..65dfd32a9656 100644 --- a/drivers/net/ethernet/dlink/sundance.c +++ b/drivers/net/ethernet/dlink/sundance.c @@ -86,6 +86,7 @@ static char *media[MAX_UNITS]; #include #include #include +#include #include #include #include @@ -395,8 +396,8 @@ struct netdev_private { unsigned int an_enable:1; unsigned int speed; unsigned int wol_enabled:1; /* Wake on LAN enabled */ - struct tasklet_struct rx_tasklet; - struct tasklet_struct tx_tasklet; + struct work_struct rx_bh_work; + struct work_struct tx_bh_work; int budget; int cur_task; /* Multicast and receive mode. 
*/ @@ -430,8 +431,8 @@ static void init_ring(struct net_device *dev); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); static int reset_tx (struct net_device *dev); static irqreturn_t intr_handler(int irq, void *dev_instance); -static void rx_poll(struct tasklet_struct *t); -static void tx_poll(struct tasklet_struct *t); +static void rx_poll(struct work_struct *work); +static void tx_poll(struct work_struct *work); static void refill_rx (struct net_device *dev); static void netdev_error(struct net_device *dev, int intr_status); static void netdev_error(struct net_device *dev, int intr_status); @@ -541,8 +542,8 @@ static int sundance_probe1(struct pci_dev *pdev, np->msg_enable = (1 << debug) - 1; spin_lock_init(&np->lock); spin_lock_init(&np->statlock); - tasklet_setup(&np->rx_tasklet, rx_poll); - tasklet_setup(&np->tx_tasklet, tx_poll); + INIT_WORK(&np->rx_bh_work, rx_poll); + INIT_WORK(&np->tx_bh_work, tx_poll); ring_space = dma_alloc_coherent(&pdev->dev, TX_TOTAL_SIZE, &ring_dma, GFP_KERNEL); @@ -965,7 +966,7 @@ static void tx_timeout(struct net_device *dev, unsigned int txqueue) unsigned long flag; netif_stop_queue(dev); - tasklet_disable_in_atomic(&np->tx_tasklet); + disable_work_sync(&np->tx_bh_work); iowrite16(0, ioaddr + IntrEnable); printk(KERN_WARNING "%s: Transmit timed out, TxStatus %2.2x " "TxFrameId %2.2x," @@ -1006,7 +1007,7 @@ static void tx_timeout(struct net_device *dev, unsigned int txqueue) netif_wake_queue(dev); } iowrite16(DEFAULT_INTR, ioaddr + IntrEnable); - tasklet_enable(&np->tx_tasklet); + enable_and_queue_work(system_bh_wq, &np->tx_bh_work); } @@ -1058,9 +1059,9 @@ static void init_ring(struct net_device *dev) } } -static void tx_poll(struct tasklet_struct *t) +static void tx_poll(struct work_struct *work) { - struct netdev_private *np = from_tasklet(np, t, tx_tasklet); + struct netdev_private *np = from_work(np, work, tx_bh_work); unsigned head = np->cur_task % TX_RING_SIZE; struct netdev_desc *txdesc = &np->tx_ring[(np->cur_tx - 1) % TX_RING_SIZE]; @@ -1104,11 +1105,11 @@ start_tx (struct sk_buff *skb, struct net_device *dev) goto drop_frame; txdesc->frag.length = cpu_to_le32 (skb->len | LastFrag); - /* Increment cur_tx before tasklet_schedule() */ + /* Increment cur_tx before bh_work is queued */ np->cur_tx++; mb(); - /* Schedule a tx_poll() task */ - tasklet_schedule(&np->tx_tasklet); + /* Queue a tx_poll() bh work */ + queue_work(system_bh_wq, &np->tx_bh_work); /* On some architectures: explicitly flush cache lines here. 
*/ if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 1 && @@ -1199,7 +1200,7 @@ static irqreturn_t intr_handler(int irq, void *dev_instance) ioaddr + IntrEnable); if (np->budget < 0) np->budget = RX_BUDGET; - tasklet_schedule(&np->rx_tasklet); + queue_work(system_bh_wq, &np->rx_bh_work); } if (intr_status & (IntrTxDone | IntrDrvRqst)) { tx_status = ioread16 (ioaddr + TxStatus); @@ -1315,9 +1316,9 @@ static irqreturn_t intr_handler(int irq, void *dev_instance) return IRQ_RETVAL(handled); } -static void rx_poll(struct tasklet_struct *t) +static void rx_poll(struct work_struct *work) { - struct netdev_private *np = from_tasklet(np, t, rx_tasklet); + struct netdev_private *np = from_work(np, work, rx_bh_work); struct net_device *dev = np->ndev; int entry = np->cur_rx % RX_RING_SIZE; int boguscnt = np->budget; @@ -1407,7 +1408,7 @@ static void rx_poll(struct tasklet_struct *t) np->budget -= received; if (np->budget <= 0) np->budget = RX_BUDGET; - tasklet_schedule(&np->rx_tasklet); + queue_work(system_bh_wq, &np->rx_bh_work); } static void refill_rx (struct net_device *dev) @@ -1819,9 +1820,9 @@ static int netdev_close(struct net_device *dev) struct sk_buff *skb; int i; - /* Wait and kill tasklet */ - tasklet_kill(&np->rx_tasklet); - tasklet_kill(&np->tx_tasklet); + /* Wait and cancel bh work */ + cancel_work_sync(&np->rx_bh_work); + cancel_work_sync(&np->tx_bh_work); np->cur_tx = 0; np->dirty_tx = 0; np->cur_task = 0; From patchwork Fri Jun 21 05:05:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706829 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oo1-f51.google.com (mail-oo1-f51.google.com [209.85.161.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 81E0A16D33A; Fri, 21 Jun 2024 05:05:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946361; cv=none; b=ogfWFtr9dh0DuC/DFnpt2BaYYSBPbutZV8ZKUAInFTY1Rv7wVPLjdXmJws5raIvBZ1fdsefTqNdC3P1LdsJPF4pC6aYrkWsPwAyN12GravMLizlhHdrG4+idHo16VuhTOUwxTd6St13ne4uRvbyabfxfoA3Vlk1zsoMRinIBhqM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946361; c=relaxed/simple; bh=zSy815pDO1WbtvQlDxRmFtHqhoClMQgl+qNaM0qSTdg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Bl1+SaTKtjhwCZO2KVVBNt9p2H2Ndh2mBjobtcxmNYD+rcyUmtpnSZSGOVfE3tCsH8MMO/n6t6DxMrvPRy9m4PMhhAyqbWY5NMtwgrAtrwJF0ino4UzM7D+PF4cFFlwiBxQ6GDmmJsIKXkZBUoRxc7KTSQ6Cbsp3+EjRnQASkzs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=US0l0Dtt; arc=none smtp.client-ip=209.85.161.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="US0l0Dtt" Received: by mail-oo1-f51.google.com with SMTP id 006d021491bc7-5bb041514c1so874816eaf.0; Thu, 20 Jun 2024 22:05:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946358; x=1719551158; darn=vger.kernel.org; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lx2TLXT5aNJZ79dehWaykAutPRc+hLUEunig8hxU8YM=; b=US0l0DttBofAPbq/sKJEw3oMMqtihZps6nPvSmnPm7dUb+2RZ5e4FG6JZKcxSGFqIu Vfe6rJCXxIh043RICE3pJMHE2B7+3+/+a4hMC/PTd+RNOEmzoIyzObNCjB+kxOvVWzRw HJtwCEs+xv9PCLEgCqgrRKVb2lft6kZ0ktRl+XQlLP9/5k9RW1ohq5OjDtnInBH72HNg uAPq/mSdsnRSOTsW1tYXCHLLw+Yzb6DEiL3hLLqknd8vBG51GYPfISq3V5U9cFpa7LtO xroFHEz4dhoxvZiJpCJ804aUVc0A5z4Xlyx0lW6NUqbRqON8Cg3dIqRcLV/A3UiHCURm I7Qw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946358; x=1719551158; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lx2TLXT5aNJZ79dehWaykAutPRc+hLUEunig8hxU8YM=; b=PoOWKqjk9BlZX/5Sesoh7Dtq/9Mpm63DgZNqA700e3yGs9dqMnKuqZjbrdD1j+QF3O MNj0n72NNZt2S/1Qz/GjqGS5/T+jVPimDZVzxbTQTYhaFjCbGQMKFiBCNmHBTipmVunR B1oLhFzBU0qu/LLc8XjruVeu+l8MLpLH/eit8geP8zF6HH0xwYIFrlxMIpxv6MXYU83q nWNIIFhHzrsIQ6+ib+FghZ6kQz0bnwo2XK9nJflZuvRb/swV6tCbZHR3pklEzrGlgvTw JR/xmzqBAakkShE1N6l+O4HGSG5Ny9+SejtrCq5MJA7/+xNKxCTmPRYGxgP07N1bQumy jEkw== X-Forwarded-Encrypted: i=1; AJvYcCX+4TqOio52JxY54f/5psc9EypEU1Z8d4QRSZP2SnkrM0ouzR/sEX9cGn4UusFccAocU4/pbYp8zCBeEYFGzCCucP+nErw9Q2/WKkUI1JaySVKf2KV56i8fM000isygqtl7S+jsXGY4FyykFHAMLttw7x6xNoAE7kTVbo7VkwNW7Q== X-Gm-Message-State: AOJu0YwUG0xawb1s9GMPmWR2RS4A8w7Vp6aryp055thlm4plya9X5QyA 5z6v1gKakVIVqL1651VFJfymsKHyFS5B6ikliHW2yv/LVhPF4ZWD X-Google-Smtp-Source: AGHT+IFsloJi4lcSbB91bbm5zeI31FHR+XREEpqaM3BX4NY1sLxjxwnbe9vu5NBXWUhyl+ydsqrBXA== X-Received: by 2002:a05:6359:5fa9:b0:19f:3355:d300 with SMTP id e5c5f4694b2df-1a1fd55965emr948922755d.25.1718946358477; Thu, 20 Jun 2024 22:05:58 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:58 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Cai Huoqing , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 10/15] net: hinic: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:20 -0700 Message-Id: <20240621050525.3720069-11-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. 
It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the huawei hinic driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- .../net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 2 +- .../net/ethernet/huawei/hinic/hinic_hw_eqs.c | 17 ++++++++--------- .../net/ethernet/huawei/hinic/hinic_hw_eqs.h | 2 +- 3 files changed, 10 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index d39eec9c62bf..f54feae40ef8 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -344,7 +344,7 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq, struct hinic_hw_wqe *hw_wqe; struct completion done; - /* Keep doorbell index correct. bh - for tasklet(ceq). */ + /* Keep doorbell index correct. For bh_work(ceq). */ spin_lock_bh(&cmdq->cmdq_lock); /* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow*/ diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 045c47786a04..381ced8f3c93 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -368,12 +368,12 @@ static void eq_irq_work(struct work_struct *work) } /** - * ceq_tasklet - the tasklet of the EQ that received the event - * @t: the tasklet struct pointer + * ceq_bh_work - the bh_work of the EQ that received the event + * @work: the work struct pointer **/ -static void ceq_tasklet(struct tasklet_struct *t) +static void ceq_bh_work(struct work_struct *work) { - struct hinic_eq *ceq = from_tasklet(ceq, t, ceq_tasklet); + struct hinic_eq *ceq = from_work(ceq, work, ceq_bh_work); eq_irq_handler(ceq); } @@ -413,7 +413,7 @@ static irqreturn_t ceq_interrupt(int irq, void *data) /* clear resend timer cnt register */ hinic_msix_attr_cnt_clear(ceq->hwif, ceq->msix_entry.entry); - tasklet_schedule(&ceq->ceq_tasklet); + queue_work(system_bh_wq, &ceq->ceq_bh_work); return IRQ_HANDLED; } @@ -782,7 +782,7 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif, INIT_WORK(&aeq_work->work, eq_irq_work); } else if (type == HINIC_CEQ) { - tasklet_setup(&eq->ceq_tasklet, ceq_tasklet); + INIT_WORK(&eq->ceq_bh_work, ceq_bh_work); } /* set the attributes of the msix entry */ @@ -833,7 +833,7 @@ static void remove_eq(struct hinic_eq *eq) hinic_hwif_write_reg(eq->hwif, HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0); } else if (eq->type == HINIC_CEQ) { - tasklet_kill(&eq->ceq_tasklet); + cancel_work_sync(&eq->ceq_bh_work); /* clear ceq_len to avoid hw access host memory */ hinic_hwif_write_reg(eq->hwif, HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id), 0); @@ -968,9 +968,8 @@ void hinic_dump_ceq_info(struct hinic_hwdev *hwdev) ci = hinic_hwif_read_reg(hwdev->hwif, addr); addr = EQ_PROD_IDX_REG_ADDR(eq); pi = hinic_hwif_read_reg(hwdev->hwif, addr); - dev_err(&hwdev->hwif->pdev->dev, "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %d, ceqe: 0x%x\n", + dev_err(&hwdev->hwif->pdev->dev, "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, wrap: %d, ceqe: 0x%x\n", q_id, ci, eq->cons_idx, pi, - eq->ceq_tasklet.state, eq->wrapped, be32_to_cpu(*(__be32 *)(GET_CURR_CEQ_ELEM(eq)))); } } diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h index 2f3222174fc7..8fed3155f15c 100644 --- 
a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h @@ -193,7 +193,7 @@ struct hinic_eq { struct hinic_eq_work aeq_work; - struct tasklet_struct ceq_tasklet; + struct work_struct ceq_bh_work; }; struct hinic_hw_event_cb { From patchwork Fri Jun 21 05:05:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706830 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DD0F516D4E0; Fri, 21 Jun 2024 05:06:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946362; cv=none; b=oJeJhTS3uHC1c9VFi9iRQ/A+DmQgPTSlgi1ZQ3ubPTECx3MsS5JZ1TjFNiuRc7wPcgNuv/TxgDl4Ht6Fhnh0xMi1q0VBcAFqrfH9wy2xBkKlEV73Yc4b2ta+9ZSnT8QM99YRcN7AbPxpkviGaYg3CIVLZ3xBxilAsGnnGpSt/Zg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946362; c=relaxed/simple; bh=5fDJtwutEEA5fJCM51lQrlFtqk5iBeZOuS616CxfH8M=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=MoMK0SPHrHUsnb25Wszk9Y+u+hvxS3q/GXgtSH77mS3nsKGZxYdnEzcK58lRsQzFhP2Y289d68fzcSsyjmn9kaBFWdx9xV/j7jrwh2Y8K2dGYqoKtqGw1IpVHynjS70h1tHcqwRQ+rDMEretd5HoecFij3Ghzw713IAug+NUP90= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=TqTKYOYT; arc=none smtp.client-ip=209.85.210.180 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="TqTKYOYT" Received: by mail-pf1-f180.google.com with SMTP id d2e1a72fcca58-7023b6d810bso1251581b3a.3; Thu, 20 Jun 2024 22:06:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946360; x=1719551160; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oEA7niYFvLj2PrS7r2T7s19npKTXpaxvDJNgOhJ1Ngk=; b=TqTKYOYTPgeWRO1T2L0ypRCLJsYaE+3kSsASJSAIt6JELEUzBiLmDon3B9YuV93hoz DawFIC8url5Z+yqfX4i8EbziDtK0nrXZSaW5DgvM2i3DY+Lx73ZXy8Vcrs9dtlamjkGb jn3gpvW2U775uiTvvLOiYUDuS/9n9N0+PK2uGSlFiG9VWwKHCkv9GnfKyeQnq50hXY9g TzUKX0lsSATedNC+uINJiO8KCH8ffJ4x4n+MNqzE0fEneTg0nGrkN4/yrN5jY9Uql941 otGuL/Htn0XETlnJVIhuEjmx+szjID8sgm7dTtD26cyT4QFYe45qW97CHki+1OlgNzJ2 vHrg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946360; x=1719551160; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oEA7niYFvLj2PrS7r2T7s19npKTXpaxvDJNgOhJ1Ngk=; b=n6E5WKJVMipW2e5uZgNmAhEat5ldUdM9/fq/zOsQtWzPC1O9SLRQnQGbUPqY+KdAVJ CbFN0dgVQ16GNCKVdU76ZNeSVTyPrw/0vSF12wJo+I/nMmBT8k4FJFghX3OG4KkDIfzR CwCwze8iVCreuvgYwycPQFjlIcPwomoZfGZRb+9PXmSL+8UyYYi8rIAraxV2pFgjVKhh 
cIMCxRtESDKKG5D6LPVTbcxe8sAZBBPJpw/fUsj8eVrP2SuheW5pmRA9XXgZXBT92q37 feT5lqUxw4If32G+lQjR+R9BSGPjrNNLE0GPQHMJiQKUHPmFE1X7DT9AMiicIs+s6FQK a2vQ== X-Forwarded-Encrypted: i=1; AJvYcCUBWrBvuZck6UcAyf5eVOtBDsQN3XApr3Hj1nuBXESPpv6J5/Sfv8vr1682EZ0EowpjsiTqjAN7QBhwdj0zopM4TRgm3PzEBmqSzlAnK0d3Z79jPZjrC8gikMpdaCjWcAGm/8TTo0I5fnBeRkL9KdTvvnx9yEorEbDG0aGTfPRERg== X-Gm-Message-State: AOJu0Yx+aANcIF2hl2a2+d/rGdwfcO0i8UVNvsfTAUpXiSGPr/d6Kjpf EjYmkv/kLaAf1dZzJDc11ABZl9jqeiwj77q9y/hG7E1O88zW/d/G X-Google-Smtp-Source: AGHT+IEVOdj0RXoWiJFMYJI3sqR93h6FlwoyW52g7W54MYrEVDJaXXjVVjRyUme5vG156RMqKoKpRQ== X-Received: by 2002:a05:6a00:982:b0:705:b0aa:a6bf with SMTP id d2e1a72fcca58-70629c1f982mr9111181b3a.2.1718946360102; Thu, 20 Jun 2024 22:06:00 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.05.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:05:59 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 11/15] net: ehea: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:21 -0700 Message-Id: <20240621050525.3720069-12-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the ehea driver. This transition ensures compatibility with the latest design and enhances performance. 
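For reference, this and the remaining conversions in the series follow one mechanical mapping; the sketch below illustrates it with made-up foo_* names (it is not code from the ehea driver). Work items queued on system_bh_wq run in softirq context, so the execution context of the old tasklets is preserved.

#include <linux/workqueue.h>

struct foo_adapter {
	struct work_struct neq_bh_work;	/* was: struct tasklet_struct neq_tasklet */
};

/* was: static void foo_neq_handler(struct tasklet_struct *t) */
static void foo_neq_bh_work(struct work_struct *work)
{
	/* was: from_tasklet(adapter, t, neq_tasklet) */
	struct foo_adapter *adapter = from_work(adapter, work, neq_bh_work);

	/* the bottom-half processing itself is unchanged */
}

/*
 * setup:    tasklet_setup(&a->neq_tasklet, fn)  ->  INIT_WORK(&a->neq_bh_work, foo_neq_bh_work)
 * teardown: tasklet_kill(&a->neq_tasklet)       ->  cancel_work_sync(&a->neq_bh_work)
 */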
Signed-off-by: Allen Pais --- drivers/net/ethernet/ibm/ehea/ehea.h | 3 ++- drivers/net/ethernet/ibm/ehea/ehea_main.c | 14 +++++++------- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/ibm/ehea/ehea.h b/drivers/net/ethernet/ibm/ehea/ehea.h index 208c440a602b..c1e7e22884fa 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea.h +++ b/drivers/net/ethernet/ibm/ehea/ehea.h @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -381,7 +382,7 @@ struct ehea_adapter { struct platform_device *ofdev; struct ehea_port *port[EHEA_MAX_PORTS]; struct ehea_eq *neq; /* notification event queue */ - struct tasklet_struct neq_tasklet; + struct work_struct neq_bh_work; struct ehea_mr mr; u32 pd; /* protection domain */ u64 max_mc_mac; /* max number of multicast mac addresses */ diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c index 1e29e5c9a2df..6960d06805f6 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea_main.c +++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c @@ -976,7 +976,7 @@ int ehea_sense_port_attr(struct ehea_port *port) u64 hret; struct hcp_ehea_port_cb0 *cb0; - /* may be called via ehea_neq_tasklet() */ + /* may be called via ehea_neq_bh_work() */ cb0 = (void *)get_zeroed_page(GFP_ATOMIC); if (!cb0) { pr_err("no mem for cb0\n"); @@ -1216,9 +1216,9 @@ static void ehea_parse_eqe(struct ehea_adapter *adapter, u64 eqe) } } -static void ehea_neq_tasklet(struct tasklet_struct *t) +static void ehea_neq_bh_work(struct work_struct *work) { - struct ehea_adapter *adapter = from_tasklet(adapter, t, neq_tasklet); + struct ehea_adapter *adapter = from_work(adapter, work, neq_bh_work); struct ehea_eqe *eqe; u64 event_mask; @@ -1243,7 +1243,7 @@ static void ehea_neq_tasklet(struct tasklet_struct *t) static irqreturn_t ehea_interrupt_neq(int irq, void *param) { struct ehea_adapter *adapter = param; - tasklet_hi_schedule(&adapter->neq_tasklet); + queue_work(system_bh_highpri_wq, &adapter->neq_bh_work); return IRQ_HANDLED; } @@ -3423,7 +3423,7 @@ static int ehea_probe_adapter(struct platform_device *dev) goto out_free_ad; } - tasklet_setup(&adapter->neq_tasklet, ehea_neq_tasklet); + INIT_WORK(&adapter->neq_bh_work, ehea_neq_bh_work); ret = ehea_create_device_sysfs(dev); if (ret) @@ -3444,7 +3444,7 @@ static int ehea_probe_adapter(struct platform_device *dev) } /* Handle any events that might be pending. 
*/ - tasklet_hi_schedule(&adapter->neq_tasklet); + queue_work(system_bh_highpri_wq, &adapter->neq_bh_work); ret = 0; goto out; @@ -3485,7 +3485,7 @@ static void ehea_remove(struct platform_device *dev) ehea_remove_device_sysfs(dev); ibmebus_free_irq(adapter->neq->attr.ist1, adapter); - tasklet_kill(&adapter->neq_tasklet); + cancel_work_sync(&adapter->neq_bh_work); ehea_destroy_eq(adapter->neq); ehea_remove_adapter_mr(adapter); From patchwork Fri Jun 21 05:05:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706831 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-ot1-f52.google.com (mail-ot1-f52.google.com [209.85.210.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EA73116D9DD; Fri, 21 Jun 2024 05:06:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.52 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946364; cv=none; b=fCwmuIFDFvMJ8JfeGmHQwE9twJeYy2twYedQ8LGSdzWCj73/7D/1EnkwJQXDCTdG8RYZ7+CHmFqXnNqBkXeD72S8FgDjTsSDgbvjSuSvK8+myxYD4ucmAQU2Oowhl/H6HGw2mKzyIzGyhoSNqeXh48o1UmgxTvWicTTtmK+cDco= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946364; c=relaxed/simple; bh=B6wWmVi8kcrXu726jCPPXrr1syt8wFPlSjG9yS2c6po=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=k9++Z3oow0JdDdU/SFyZguTPEVyl68Slh8UfXQ8UthwRLMM+v91C0xBtdUagYU5ya4sOAbEDwBpyLtQa14ci90rENyf7tQhODO5ox7TYhB4ySQdLVTRlT7oMQnTKdN/oKEGLzJxJh9Zujn1EVYajWYtI1RZBSaYSOlgVmdEre9o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=cxXlPTbI; arc=none smtp.client-ip=209.85.210.52 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cxXlPTbI" Received: by mail-ot1-f52.google.com with SMTP id 46e09a7af769-6fa11ac8695so952454a34.3; Thu, 20 Jun 2024 22:06:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946362; x=1719551162; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=K7Qm0uXfQVJ0pL26mcNJNw6zIMxofNWokQ38AnTAGFg=; b=cxXlPTbInsxVT1+9MGTlx+A5dutWAuoMxFvGDeRQ8zpcI8aBq27oLWyfQiNYnkYv79 KOki+GxSaufo9Jz2R/w+NRDu4DOnuZY2q8WAK/DruVkJX05QlVIoEIGAZosDCqxZ+78Z Q9PIj3Ckrs0ubGPrIOULiZcVJKPEb4jmPSaPAZh246A5ayac9Q+GeDZoFHixGyPtyuf3 RnL+/rKRd3q+qChsmERbyWHqpSuhzlesU6G5HhyvhU+LsMpFZplC0dZqijOPYhecLjn1 +gJkjVvGfPyjFkWjZcXCMaZWZk6lPvFtvzkkzcrBlvwFpj8ITU96EJZVz7TcSEp9Sv/J 13nA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946362; x=1719551162; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=K7Qm0uXfQVJ0pL26mcNJNw6zIMxofNWokQ38AnTAGFg=; b=ti9HmRPBhDOqCsbrkMLhcgy5KxO2od1SiH+JAByT3zp617RKlC5yogCjMaQiRWtXjR 
Mb3C7qB4V9VtR8JAWEcONlvnjNRFZ96BDt4cIVq9i1xJNngjUmSW9CBRDJV8seqorR9q 3VKREZEFqU9MKBJkF/odjWkfWsQV8+f/wGUfAhyJLoujqHIsNH9vNLUwCKkGknSQQUQC IoHOT2Wud3OkjkANvFeogj/KRZGhSp9Ms691YUK4sDnlTf4G0kbu/b3F9bSQ9Qh/LAtH QiMlDpme1Z/BOpPdCfsJAm1SQX5YaNZnL1NkjqwEbvXUNaZY+UwwoyBn46hY5RhA+NP5 oveQ== X-Forwarded-Encrypted: i=1; AJvYcCUzToxlHiLwwtwkDaFOGL1NQRXfbZ7/qXNnrum1lEPdiQ+QhOcxXXy/SiXhC/yCOzcDXSj5CuHrZDzz/3/vCThE9sys6AG10v6l2krqfpveZKYbngbPVdwGrXwA6Z8QvImrgh4VUoKWytxRtjHZzvW6ybnvhRafoBA4fVDq13ncYg== X-Gm-Message-State: AOJu0YzWjCSrH4jp54cthufT1A23EBSzVweLKTaJcLPhU6CS/OqsuKWN QLvq9zwcfR5Nza0KmdoQwlfq8WsprWCv+8qsdRRe8g026rsKW3KL X-Google-Smtp-Source: AGHT+IEl8AUL32RskP9KGXc80Es0L2iMivKVGf7869FhG8wMEglB481MAH7e/tfcS1tKjlee8zzZuw== X-Received: by 2002:a05:6830:18ea:b0:6f9:9540:76a8 with SMTP id 46e09a7af769-70073b35f82mr7986503a34.13.1718946361927; Thu, 20 Jun 2024 22:06:01 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.06.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:06:01 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Michael Ellerman , Nicholas Piggin , Christophe Leroy , "Naveen N. Rao" , Haren Myneni , Rick Lindsley , Nick Child , Thomas Falcon , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, aneesh.kumar@kernel.org, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org Subject: [PATCH 12/15] net: ibmvnic: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:22 -0700 Message-Id: <20240621050525.3720069-13-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the ibmvnic driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- drivers/net/ethernet/ibm/ibmvnic.c | 24 ++++++++++++------------ drivers/net/ethernet/ibm/ibmvnic.h | 2 +- 2 files changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 5e9a93bdb518..2e817a560c3a 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -2725,7 +2725,7 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason) /* * Initialize the init_done completion and return code values. We * can get a transport event just after registering the CRQ and the - * tasklet will use this to communicate the transport event. 
To ensure + * bh work will use this to communicate the transport event. To ensure * we don't miss the notification/error, initialize these _before_ * regisering the CRQ. */ @@ -4429,7 +4429,7 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry) int cap_reqs; /* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on - * the PROMISC flag). Initialize this count upfront. When the tasklet + * the PROMISC flag). Initialize this count upfront. When the bh work * receives a response to all of these, it will send the next protocol * message (QUERY_IP_OFFLOAD). */ @@ -4965,7 +4965,7 @@ static void send_query_cap(struct ibmvnic_adapter *adapter) int cap_reqs; /* We send out 25 QUERY_CAPABILITY CRQs below. Initialize this count - * upfront. When the tasklet receives a response to all of these, it + * upfront. When the bh work receives a response to all of these, it * can send out the next protocol messaage (REQUEST_CAPABILITY). */ cap_reqs = 25; @@ -5477,7 +5477,7 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq, int i; /* CHECK: Test/set of login_pending does not need to be atomic - * because only ibmvnic_tasklet tests/clears this. + * because only ibmvnic_bh_work tests/clears this. */ if (!adapter->login_pending) { netdev_warn(netdev, "Ignoring unexpected login response\n"); @@ -6063,13 +6063,13 @@ static irqreturn_t ibmvnic_interrupt(int irq, void *instance) { struct ibmvnic_adapter *adapter = instance; - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return IRQ_HANDLED; } -static void ibmvnic_tasklet(struct tasklet_struct *t) +static void ibmvnic_bh_work(struct work_struct *work) { - struct ibmvnic_adapter *adapter = from_tasklet(adapter, t, tasklet); + struct ibmvnic_adapter *adapter = from_work(adapter, work, bh_work); struct ibmvnic_crq_queue *queue = &adapter->crq; union ibmvnic_crq *crq; unsigned long flags; @@ -6150,7 +6150,7 @@ static void release_crq_queue(struct ibmvnic_adapter *adapter) netdev_dbg(adapter->netdev, "Releasing CRQ\n"); free_irq(vdev->irq, adapter); - tasklet_kill(&adapter->tasklet); + cancel_work_sync(&adapter->bh_work); do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); @@ -6201,7 +6201,7 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter) retrc = 0; - tasklet_setup(&adapter->tasklet, (void *)ibmvnic_tasklet); + INIT_WORK(&adapter->bh_work, (void *)ibmvnic_bh_work); netdev_dbg(adapter->netdev, "registering irq 0x%x\n", vdev->irq); snprintf(crq->name, sizeof(crq->name), "ibmvnic-%x", @@ -6223,12 +6223,12 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter) spin_lock_init(&crq->lock); /* process any CRQs that were queued before we enabled interrupts */ - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return retrc; req_irq_failed: - tasklet_kill(&adapter->tasklet); + cancel_work_sync(&adapter->bh_work); do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); @@ -6621,7 +6621,7 @@ static int ibmvnic_resume(struct device *dev) if (adapter->state != VNIC_OPEN) return 0; - tasklet_schedule(&adapter->tasklet); + queue_work(system_bh_wq, &adapter->bh_work); return 0; } diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h index 94ac36b1408b..b65b210a8059 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.h +++ b/drivers/net/ethernet/ibm/ibmvnic.h @@ -1036,7 +1036,7 @@ struct ibmvnic_adapter { u32 cur_rx_buf_sz; 
u32 prev_rx_buf_sz; - struct tasklet_struct tasklet; + struct work_struct bh_work; enum vnic_state state; /* Used for serialization of state field. When taking both state * and rwi locks, take state lock first. From patchwork Fri Jun 21 05:05:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706832 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-ot1-f54.google.com (mail-ot1-f54.google.com [209.85.210.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C2F0C16DED4; Fri, 21 Jun 2024 05:06:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.54 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946366; cv=none; b=oulRkUI5EXQn9KjgxYuU6huRxVOLxbfaXwh1/k9zqMGMugJnVMFDGrzMCjhaJitfwC3pAD79kjBMN6g6g05I/j9ozd2SVeS9PiWUu7hWlFZtHgaU26ZpERmPB+TP7LATWBYmpJGlDGCJenYm+ncvkrkPcKZ++M9eNea7lthbWwk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946366; c=relaxed/simple; bh=ZDz2LhFLQz9qCiesFC2yTihbYdhNV6oWS3LIkrpVToI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ax68J8kOrGOolyzAMZWOKeQ0hkMw6014QBuzDgKQvcJZHc1DQ1dsPhuQO1OdH/oT291aanPMT5Ikhx7DHbJ6FPgoizV+4LoaEgoUTtfFtrqL3H1mWZ3nmYdv22GzYDzu6w8t6dc+FqXtsI4RAIVEbKLuPBQgXejcieJVGkl8Il0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=dWpfJ8Ci; arc=none smtp.client-ip=209.85.210.54 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="dWpfJ8Ci" Received: by mail-ot1-f54.google.com with SMTP id 46e09a7af769-6f9b4d69f53so855088a34.0; Thu, 20 Jun 2024 22:06:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946364; x=1719551164; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=gbaKAp4rBypUxqcKQTtYdGFIU8vf1Ad+mxOnv6tDJnw=; b=dWpfJ8CinP2PJphGYSJfJv3d4QeFOnJjdPdJfIQuyULxMvje7IO6qzBYkLY/xoG024 0jPNk1V284eJ/c3KtBZzaaxwd9w8kvgQsS0kUEvZYe4sZT2eukDubwIdhz+yT6+CY4Uk cB6REfKVR+kIGN/GpJfwlPyKbwGt80dLjmJo6T5de5LXEbxOcgpeMDm3zbx7W88YnQy2 P8J/kHiPxBKT+jsCO73Z1KRbRTlhWYo/GCGM4+7eeCYcaMKw70IUUcJuFH9h9Q9zsoDc FXl8JMpUSpAtDMw0iyS9yAONqcxiIhTRtoU7ImmEtfS4oVtGOpRC8aI6vCr4xzfwUFJO iwzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946364; x=1719551164; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gbaKAp4rBypUxqcKQTtYdGFIU8vf1Ad+mxOnv6tDJnw=; b=BgNLPY1h4p49U2KAPJ2IAR+2y9EcVR5kWHnEqihgb3zbqnzfC16TsmzIPyILSvMSgk JrQ19NSfiZCnzPgcwT1aLFd+kvR7eiROI3Pp2wFv5saUAVXBajWzpoRay00UtcMZNS2I LosbMqxXCxUpE15yhNIotVJvgm2WZfHTC3+0FSV6RSMcp7uYQMAIqC1F2G/Yw/8rA/EF rpoKq1xYmpjOCmunXC7oJ5s+iiL1YzXVxyJq9z6cg/XZsDpLJwXTzIj6rKNq6inlzMzx kB9aKu/uRug0zhNzgpgLYHmnU+oU5V+15TCFgtwkfIugp7ga7VuX6Vlqx8e/68PfxTBJ 
YPQA== X-Forwarded-Encrypted: i=1; AJvYcCW1sKDJNU360cpOB4PQhiU3HG04lV6oIYxrkqKnFouf/Jzb0OMwyuwLcrhu/7+2VR2nq4zaI8LZzaLloH6jwU2Xz/X9JzL1Uyv2CfvFyMO3N22LH3fxlnyKYMJzMjywutcQgyNcyoqo6ziw7663lFbwhASHf/QvWTbq7ZjrECbWtA== X-Gm-Message-State: AOJu0YzeNiOCyyG+74TY53rrgzD2Is1cMb3ab8WS9a2CS1a8r/VeaDLr WYkEWa6GG9R31Qud++oym6b0wvhEG992GKh5T+ZI3vKfmRGJzyHg X-Google-Smtp-Source: AGHT+IHpJWoyOgKjSQNeAeyQsc97SFuD4xHKSmRBj8Z2m00tZ3ggPULpelvaMwrLn3RSm6gMyiOWdA== X-Received: by 2002:a9d:6558:0:b0:700:8eea:af41 with SMTP id 46e09a7af769-7008eeab075mr6158681a34.1.1718946363694; Thu, 20 Jun 2024 22:06:03 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.06.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:06:03 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Guo-Fu Tseng , "David S. Miller" , Eric Dumazet , Paolo Abeni Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 13/15] net: jme: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:23 -0700 Message-Id: <20240621050525.3720069-14-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the jme driver. This transition ensures compatibility with the latest design and enhances performance. 
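Beyond the plain schedule/kill conversions, jme also disables and re-enables its bottom halves around link changes and suspend. The tasklet_disable()/tasklet_enable() pairs map onto the work-item helpers roughly as in this sketch (hypothetical bar_* names, not the actual jme code):

#include <linux/workqueue.h>

struct bar_adapter {
	struct work_struct rxclean_bh_work;
};

static void bar_reconfigure(struct bar_adapter *jme)
{
	/* was: tasklet_disable(&jme->rxclean_task);
	 * waits for a running work item to finish and keeps it from running again
	 */
	disable_work_sync(&jme->rxclean_bh_work);

	/* reprogram rings/hardware while the bottom half cannot run */

	/* was: tasklet_enable(&jme->rxclean_task);
	 * re-enables the work item and queues it once so anything that arrived
	 * while it was disabled gets processed
	 */
	enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work);
}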
Signed-off-by: Allen Pais --- drivers/net/ethernet/jme.c | 72 +++++++++++++++++++------------------- drivers/net/ethernet/jme.h | 8 ++--- 2 files changed, 40 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c index b06e24562973..b1a92b851b3b 100644 --- a/drivers/net/ethernet/jme.c +++ b/drivers/net/ethernet/jme.c @@ -1141,7 +1141,7 @@ jme_dynamic_pcc(struct jme_adapter *jme) if (unlikely(dpi->attempt != dpi->cur && dpi->cnt > 5)) { if (dpi->attempt < dpi->cur) - tasklet_schedule(&jme->rxclean_task); + queue_work(system_bh_wq, &jme->rxclean_bh_work); jme_set_rx_pcc(jme, dpi->attempt); dpi->cur = dpi->attempt; dpi->cnt = 0; @@ -1182,9 +1182,9 @@ jme_shutdown_nic(struct jme_adapter *jme) } static void -jme_pcc_tasklet(struct tasklet_struct *t) +jme_pcc_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, pcc_task); + struct jme_adapter *jme = from_work(jme, work, pcc_bh_work); struct net_device *netdev = jme->dev; if (unlikely(test_bit(JME_FLAG_SHUTDOWN, &jme->flags))) { @@ -1282,9 +1282,9 @@ static void jme_link_change_work(struct work_struct *work) jme_stop_shutdown_timer(jme); jme_stop_pcc_timer(jme); - tasklet_disable(&jme->txclean_task); - tasklet_disable(&jme->rxclean_task); - tasklet_disable(&jme->rxempty_task); + disable_work_sync(&jme->txclean_bh_work); + disable_work_sync(&jme->rxclean_bh_work); + disable_work_sync(&jme->rxempty_bh_work); if (netif_carrier_ok(netdev)) { jme_disable_rx_engine(jme); @@ -1304,7 +1304,7 @@ static void jme_link_change_work(struct work_struct *work) rc = jme_setup_rx_resources(jme); if (rc) { pr_err("Allocating resources for RX error, Device STOPPED!\n"); - goto out_enable_tasklet; + goto out_enable_bh_work; } rc = jme_setup_tx_resources(jme); @@ -1326,22 +1326,22 @@ static void jme_link_change_work(struct work_struct *work) jme_start_shutdown_timer(jme); } - goto out_enable_tasklet; + goto out_enable_bh_work; err_out_free_rx_resources: jme_free_rx_resources(jme); -out_enable_tasklet: - tasklet_enable(&jme->txclean_task); - tasklet_enable(&jme->rxclean_task); - tasklet_enable(&jme->rxempty_task); +out_enable_bh_work: + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work); out: atomic_inc(&jme->link_changing); } static void -jme_rx_clean_tasklet(struct tasklet_struct *t) +jme_rx_clean_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, rxclean_task); + struct jme_adapter *jme = from_work(jme, work, rxclean_bh_work); struct dynpcc_info *dpi = &(jme->dpi); jme_process_receive(jme, jme->rx_ring_size); @@ -1374,9 +1374,9 @@ jme_poll(JME_NAPI_HOLDER(holder), JME_NAPI_WEIGHT(budget)) } static void -jme_rx_empty_tasklet(struct tasklet_struct *t) +jme_rx_empty_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, rxempty_task); + struct jme_adapter *jme = from_work(jme, work, rxempty_bh_work); if (unlikely(atomic_read(&jme->link_changing) != 1)) return; @@ -1386,7 +1386,7 @@ jme_rx_empty_tasklet(struct tasklet_struct *t) netif_info(jme, rx_status, jme->dev, "RX Queue Full!\n"); - jme_rx_clean_tasklet(&jme->rxclean_task); + jme_rx_clean_bh_work(&jme->rxclean_bh_work); while (atomic_read(&jme->rx_empty) > 0) { atomic_dec(&jme->rx_empty); @@ -1410,9 +1410,9 @@ jme_wake_queue_if_stopped(struct jme_adapter *jme) } -static void jme_tx_clean_tasklet(struct tasklet_struct *t) +static void 
jme_tx_clean_bh_work(struct work_struct *work) { - struct jme_adapter *jme = from_tasklet(jme, t, txclean_task); + struct jme_adapter *jme = from_work(jme, work, txclean_bh_work); struct jme_ring *txring = &(jme->txring[0]); struct txdesc *txdesc = txring->desc; struct jme_buffer_info *txbi = txring->bufinf, *ctxbi, *ttxbi; @@ -1510,12 +1510,12 @@ jme_intr_msi(struct jme_adapter *jme, u32 intrstat) if (intrstat & INTR_TMINTR) { jwrite32(jme, JME_IEVE, INTR_TMINTR); - tasklet_schedule(&jme->pcc_task); + queue_work(system_bh_wq, &jme->pcc_bh_work); } if (intrstat & (INTR_PCCTXTO | INTR_PCCTX)) { jwrite32(jme, JME_IEVE, INTR_PCCTXTO | INTR_PCCTX | INTR_TX0); - tasklet_schedule(&jme->txclean_task); + queue_work(system_bh_wq, &jme->txclean_bh_work); } if ((intrstat & (INTR_PCCRX0TO | INTR_PCCRX0 | INTR_RX0EMP))) { @@ -1538,9 +1538,9 @@ jme_intr_msi(struct jme_adapter *jme, u32 intrstat) } else { if (intrstat & INTR_RX0EMP) { atomic_inc(&jme->rx_empty); - tasklet_hi_schedule(&jme->rxempty_task); + queue_work(system_bh_highpri_wq, &jme->rxempty_bh_work); } else if (intrstat & (INTR_PCCRX0TO | INTR_PCCRX0)) { - tasklet_hi_schedule(&jme->rxclean_task); + queue_work(system_bh_highpri_wq, &jme->rxclean_bh_work); } } @@ -1826,9 +1826,9 @@ jme_open(struct net_device *netdev) jme_clear_pm_disable_wol(jme); JME_NAPI_ENABLE(jme); - tasklet_setup(&jme->txclean_task, jme_tx_clean_tasklet); - tasklet_setup(&jme->rxclean_task, jme_rx_clean_tasklet); - tasklet_setup(&jme->rxempty_task, jme_rx_empty_tasklet); + INIT_WORK(&jme->txclean_bh_work, jme_tx_clean_bh_work); + INIT_WORK(&jme->rxclean_bh_work, jme_rx_clean_bh_work); + INIT_WORK(&jme->rxempty_bh_work, jme_rx_empty_bh_work); rc = jme_request_irq(jme); if (rc) @@ -1914,9 +1914,9 @@ jme_close(struct net_device *netdev) JME_NAPI_DISABLE(jme); cancel_work_sync(&jme->linkch_task); - tasklet_kill(&jme->txclean_task); - tasklet_kill(&jme->rxclean_task); - tasklet_kill(&jme->rxempty_task); + cancel_work_sync(&jme->txclean_bh_work); + cancel_work_sync(&jme->rxclean_bh_work); + cancel_work_sync(&jme->rxempty_bh_work); jme_disable_rx_engine(jme); jme_disable_tx_engine(jme); @@ -3020,7 +3020,7 @@ jme_init_one(struct pci_dev *pdev, atomic_set(&jme->tx_cleaning, 1); atomic_set(&jme->rx_empty, 1); - tasklet_setup(&jme->pcc_task, jme_pcc_tasklet); + INIT_WORK(&jme->pcc_bh_work, jme_pcc_bh_work); INIT_WORK(&jme->linkch_task, jme_link_change_work); jme->dpi.cur = PCC_P1; @@ -3180,9 +3180,9 @@ jme_suspend(struct device *dev) netif_stop_queue(netdev); jme_stop_irq(jme); - tasklet_disable(&jme->txclean_task); - tasklet_disable(&jme->rxclean_task); - tasklet_disable(&jme->rxempty_task); + disable_work_sync(&jme->txclean_bh_work); + disable_work_sync(&jme->rxclean_bh_work); + disable_work_sync(&jme->rxempty_bh_work); if (netif_carrier_ok(netdev)) { if (test_bit(JME_FLAG_POLL, &jme->flags)) @@ -3198,9 +3198,9 @@ jme_suspend(struct device *dev) jme->phylink = 0; } - tasklet_enable(&jme->txclean_task); - tasklet_enable(&jme->rxclean_task); - tasklet_enable(&jme->rxempty_task); + enable_and_queue_work(system_bh_wq, &jme->txclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxclean_bh_work); + enable_and_queue_work(system_bh_wq, &jme->rxempty_bh_work); jme_powersave_phy(jme); diff --git a/drivers/net/ethernet/jme.h b/drivers/net/ethernet/jme.h index 860494ff3714..73a8a1438340 100644 --- a/drivers/net/ethernet/jme.h +++ b/drivers/net/ethernet/jme.h @@ -406,11 +406,11 @@ struct jme_adapter { spinlock_t phy_lock; spinlock_t macaddr_lock; spinlock_t rxmcs_lock; - struct 
tasklet_struct rxempty_task; - struct tasklet_struct rxclean_task; - struct tasklet_struct txclean_task; + struct work_struct rxempty_bh_work; + struct work_struct rxclean_bh_work; + struct work_struct txclean_bh_work; struct work_struct linkch_task; - struct tasklet_struct pcc_task; + struct work_struct pcc_bh_work; unsigned long flags; u32 reg_txcs; u32 reg_txpfc; From patchwork Fri Jun 21 05:05:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706833 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pj1-f49.google.com (mail-pj1-f49.google.com [209.85.216.49]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6BB5E16E882; Fri, 21 Jun 2024 05:06:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.49 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946367; cv=none; b=ifX4VPQMWj/eBuvb2jDKN3eRmr2DTiKfJsrY285SGQtg8euMAqTUmxR6ZTt07b+jh4/u1RsC9oqXoDfsu6ImoebQt4r1Dh2G/hg5V8YRlfaVw7lNCXLT1QOGmj8hK4Ib2PGZABdwI1ErBjl52nLMDzKCAMLp1l9UJE0oVaf0OXU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946367; c=relaxed/simple; bh=x/RfIjlwpumUmDOu4Tf2V/YAEWwgje+MYgVG2elP/Sw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=bXZqx0R9t9xPAl62xlsucjHmmH3wCnDxQHwkdDDrXpIoJMMvp3aKrQf7Hl2iVl/vfYwPVxrHYV2sp5TCscvtFHWRz8Ka+2ZLSeRD1USMVJan06Y6RUI/ob3wVM9TeHjSa5+fGv5YEeIcGws+uX44n7BXs81Yf0EjViWJ72IJpa8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=cIZVXW9+; arc=none smtp.client-ip=209.85.216.49 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cIZVXW9+" Received: by mail-pj1-f49.google.com with SMTP id 98e67ed59e1d1-2c7c61f7ee3so1385124a91.1; Thu, 20 Jun 2024 22:06:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946366; x=1719551166; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=nb9G2dJ7OPjt/ejkd8tb2g/z5QKcxicC+1X1lii3BoE=; b=cIZVXW9+Ae5zckttlTsg85lq7f7jFOVRIWOqG22Voa9Ji3SZR8NoQt9nN+IzOfjdvq pptbiOH6cGQZtewuPd2M0Q0ufzEMOmNHLNMDUbPqDWoyEml52m/9ZrZI/ML+17rQ+4Ip rZ+/8LxtyLURzB0u4/nz0ZaweLlU1VG09f1PKpv52KlqCT4KCO+ZwFDlU9x8eCmSV3Zo 3TJQREZcqbD9pz1DOgixG/OivFDvlIyNQhulr+xD0qC8pkc60W82BqDTzvj7B6+FpYnS RKznyLtD+ixk6UOiICzeeBVW9VOG5tX0oi0g0H6DuKMe+Po7V4o3QanvUUKD3xpAdReV jZZQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946366; x=1719551166; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=nb9G2dJ7OPjt/ejkd8tb2g/z5QKcxicC+1X1lii3BoE=; b=Jw3ReE8Fmk/6gpveV0yAfLOujcrzPSMMMGrIvPHEyqJ5jcPheTgAUpBWoT8B8MOYpl UhJkJonXMLpT1w9BT0V/eq04Ymiq328JSYzttBV2dGr0ki6CwwpiVWPOaGQdy7WoADub 
Xs8i2BAasT0XKKTwjf/mqlrGLXVpGH8PxCe+JcmUJOilC9/WFvFs4S/LGrW4yKtzBNwS LRMi+B5h8dN/3RXkR4RvBdiit98lWl84MwCL+M0FnWfw3w5VE0tUuREd72OFYwYHu7Pj GGC7eoXug2RFGhwiJJQMzhgj+dQoDzBeGYUcKvxjN+h9HDfDY0/oYNZ5yh97PA3B1g9s jdmQ== X-Forwarded-Encrypted: i=1; AJvYcCX8e2Ku43ALsNeuVy5l9wJNmssu/NwR0UptJxO4s9LzshLNC2v3CT1Y2LbDBAjIksabTeO7Q7yrb1gjPW6wmRSZyRVzQ0rn7ddBrxrfCycGqWNM9190irR43zNevlkkKQa4B/ntLpnYgmnD5p12W+yTHF8TYcG61w/+Kbl7aoIGjQ== X-Gm-Message-State: AOJu0YxxGAZ6c0xN8P01NskR8tGxYSUn1wuA0vUBWcqWilP2O2V0G3o6 YBQGfCh50WAj49UgtdO5RpIbkPANFILbQxCEIZCdgnd7yG5AHjVn X-Google-Smtp-Source: AGHT+IFt/DVrB0r5Yt5O6INCZ9XvmjOda1oxh5UnpUp4RYoHaOpA6xAH45TisqRzLqPRUnNT9XKRaA== X-Received: by 2002:a17:90a:43a5:b0:2c2:8d49:5e6c with SMTP id 98e67ed59e1d1-2c7b58fc980mr7060917a91.7.1718946365579; Thu, 20 Jun 2024 22:06:05 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.06.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:06:05 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Marcin Wojtas , Russell King , "David S. Miller" , Eric Dumazet , Paolo Abeni , Mirko Lindner , Stephen Hemminger Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo@kernel.org, matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org Subject: [PATCH 14/15] net: marvell: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:24 -0700 Message-Id: <20240621050525.3720069-15-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the marvell driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 +--- drivers/net/ethernet/marvell/skge.c | 12 ++++++------ drivers/net/ethernet/marvell/skge.h | 3 ++- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 671368d2c77e..47fe71a8f57e 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -2628,9 +2628,7 @@ static u32 mvpp2_txq_desc_csum(int l3_offs, __be16 l3_proto, * The number of sent descriptors is returned. * Per-thread access * - * Called only from mvpp2_txq_done(), called from mvpp2_tx() - * (migration disabled) and from the TX completion tasklet (migration - * disabled) so using smp_processor_id() is OK. 
+ * Called only from mvpp2_txq_done(). */ static inline int mvpp2_txq_sent_desc_proc(struct mvpp2_port *port, struct mvpp2_tx_queue *txq) diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index fcfb34561882..4448af079447 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -3342,13 +3342,13 @@ static void skge_error_irq(struct skge_hw *hw) } /* - * Interrupt from PHY are handled in tasklet (softirq) + * Interrupt from PHY are handled in bh work (softirq) * because accessing phy registers requires spin wait which might * cause excess interrupt latency. */ -static void skge_extirq(struct tasklet_struct *t) +static void skge_extirq(struct work_struct *work) { - struct skge_hw *hw = from_tasklet(hw, t, phy_task); + struct skge_hw *hw = from_work(hw, work, phy_bh_work); int port; for (port = 0; port < hw->ports; port++) { @@ -3389,7 +3389,7 @@ static irqreturn_t skge_intr(int irq, void *dev_id) status &= hw->intr_mask; if (status & IS_EXT_REG) { hw->intr_mask &= ~IS_EXT_REG; - tasklet_schedule(&hw->phy_task); + queue_work(system_bh_wq, &hw->phy_bh_work); } if (status & (IS_XA1_F|IS_R1_F)) { @@ -3937,7 +3937,7 @@ static int skge_probe(struct pci_dev *pdev, const struct pci_device_id *ent) hw->pdev = pdev; spin_lock_init(&hw->hw_lock); spin_lock_init(&hw->phy_lock); - tasklet_setup(&hw->phy_task, skge_extirq); + INIT_WORK(&hw->phy_bh_work, skge_extirq); hw->regs = ioremap(pci_resource_start(pdev, 0), 0x4000); if (!hw->regs) { @@ -4035,7 +4035,7 @@ static void skge_remove(struct pci_dev *pdev) dev0 = hw->dev[0]; unregister_netdev(dev0); - tasklet_kill(&hw->phy_task); + cancel_work_sync(&hw->phy_bh_work); spin_lock_irq(&hw->hw_lock); hw->intr_mask = 0; diff --git a/drivers/net/ethernet/marvell/skge.h b/drivers/net/ethernet/marvell/skge.h index f72217348eb4..0cf77f4b1c57 100644 --- a/drivers/net/ethernet/marvell/skge.h +++ b/drivers/net/ethernet/marvell/skge.h @@ -5,6 +5,7 @@ #ifndef _SKGE_H #define _SKGE_H #include +#include /* PCI config registers */ #define PCI_DEV_REG1 0x40 @@ -2418,7 +2419,7 @@ struct skge_hw { u32 ram_offset; u16 phy_addr; spinlock_t phy_lock; - struct tasklet_struct phy_task; + struct work_struct phy_bh_work; char irq_name[]; /* skge@pci:000:04:00.0 */ }; From patchwork Fri Jun 21 05:05:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Allen X-Patchwork-Id: 13706834 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-oo1-f48.google.com (mail-oo1-f48.google.com [209.85.161.48]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5CA3F16EBF5; Fri, 21 Jun 2024 05:06:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.161.48 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946369; cv=none; b=NUSk228y/zvgfkyx+J724NOVRwKgaNb1rkZ6hxYfQ/qC48vqANu28rwPoVeYrEzcmt3uYU50A0AnAbDT8rtBM19RLBDAUXqhyfKZ3KJLmRkWgdTQukXqRjxQu/GdLaWHZJRbEsrsV6b3jHEc/mHd3hDdO3umHwZO0Ak16oi5RL4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718946369; c=relaxed/simple; bh=cI06cWMeiiaaHvVrivHUrUMAP27szVAGIwpicbm6Rjk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
b=XzZODQe7pYvHGc1PXaoZGGnKKI+Avg6lroQl6oKvDvaG65Go2Yb8fGssfe7tLGYHlNx0icApj/y2+tiB9VDKvVQ/ZNpmFjneW9Sn2mWt8EdQNummdQt/eM9+wxBlySQkkPKpZRNhsbjFzoT2BC3GAUDzFbNp8a2BFK4nct8uPW8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=Oi13HSaT; arc=none smtp.client-ip=209.85.161.48 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Oi13HSaT" Received: by mail-oo1-f48.google.com with SMTP id 006d021491bc7-5c1bf0649a5so795469eaf.3; Thu, 20 Jun 2024 22:06:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1718946367; x=1719551167; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Duz+R7OD5yozyO0xOdsWZmjZsXtc7sydmF+n4fo7yxk=; b=Oi13HSaTOmu7MXJGesAKZblZSYMeUViXGOtGHHE9Bd5FUl1Gnay0Yu4COFGweYO7ck JusiDnq7yeXymvveSfBZt599aK8vpxApebhovn1ixgdu7u9bSI8TvDkQ12dgUPKKCWwf S1c1atuhyPQPjL7YZDa7/9sGZ991K72sW9JUH7ZzsJeiPywohROHz8/jLiq7j4vdzcHr /FH5L2XIP5HOgOhgjJR8t80rWbsfaQjD8fIKDgk7B3Y4XnRp2N5gk5mBeXeOP8XC+4qN 6S4hA9DK0PtvUe0xhv8rn/Kqgup91KnonucW5XAgr3HON8y8KB2rvV1ykZ8JKvdVnkHY gFIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718946367; x=1719551167; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Duz+R7OD5yozyO0xOdsWZmjZsXtc7sydmF+n4fo7yxk=; b=mHVRmD/P7YxgLklci4ZcmGmnzniLGwQD4rWsMP/sWuzxyyZNDMkPKJcc27PL3s7E6Y mmTRPvxNs2aTcyobt5eVYbDqtpE4fdtgyCUxG+fGNNhNiGZSqL/1wP019fgv3TQTjjJz 1vUgdDrzQZUISmBE7yRw0lmCUx40ZBchmlKtxC/aP2ZIpdfWqjgWIQwKRi0/W6yueRij zsaHrGI+FYkthTb1C60Y6zoXxLsb0QvywL+DvnO4D0UfD8fS6OfPmY7x9iK8U3yt5f+a jxORjjQy63D5zRHmDruOdqKpiyuLIOjFEvXTW3knijaDaryLBpU39ocKrNBP0IAUUjlq UhRg== X-Forwarded-Encrypted: i=1; AJvYcCVQZiOWmyB5BxHsTpSoSVHt250j+P+49GaUrzEox2VzfddpUzlTGTMWy+ZIJoX9DRye7ma8N3WSWcW9IycZL2ie9viDMgQ3Q1Ny/3qJAw0Ndh2bnPjEj/Uj/mLyOtK4lyVQM0NSNMcFL41/c0uw5NSaeQBiBn7DS3ydVX+EuysIog== X-Gm-Message-State: AOJu0YyolaxPmrWgLqLOTtjMQinYw9DlRx8XZSDq3U7+1tSLgNfkzxIm 70ZmFEAARx8/yjaqrnw4m10Qszd0g1yqPo2acjIWPEcnK+i0x5L+ X-Google-Smtp-Source: AGHT+IGT9uXfh3BNKu1qNO8/2xY3x6EuWChfPz49EjQAep0tUJhHE+dOOG2sFET4fGBrowHmzjImpA== X-Received: by 2002:a05:6870:16:b0:23c:1f34:730 with SMTP id 586e51a60fabf-25c94d7278cmr6774170fac.49.1718946367326; Thu, 20 Jun 2024 22:06:07 -0700 (PDT) Received: from apais-devbox.. ([2001:569:766d:6500:fb4e:6cf3:3ec6:9292]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-716c950d71asm371308a12.62.2024.06.20.22.06.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jun 2024 22:06:06 -0700 (PDT) From: Allen Pais To: kuba@kernel.org, Felix Fietkau , Sean Wang , Mark Lee , Lorenzo Bianconi , "David S. 
Miller" , Eric Dumazet , Paolo Abeni , Matthias Brugger , AngeloGioacchino Del Regno Cc: jes@trained-monkey.org, kda@linux-powerpc.org, cai.huoqing@linux.dev, dougmill@linux.ibm.com, npiggin@gmail.com, christophe.leroy@csgroup.eu, aneesh.kumar@kernel.org, naveen.n.rao@linux.ibm.com, nnac123@linux.ibm.com, tlfalcon@linux.ibm.com, cooldavid@cooldavid.org, marcin.s.wojtas@gmail.com, mlindner@marvell.com, stephen@networkplumber.org, borisp@nvidia.com, bryan.whitehead@microchip.com, UNGLinuxDriver@microchip.com, louis.peens@corigine.com, richardcochran@gmail.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-acenic@sunsite.dk, linux-net-drivers@amd.com, Allen Pais , netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org Subject: [PATCH 15/15] net: mtk-wed: Convert tasklet API to new bottom half workqueue mechanism Date: Thu, 20 Jun 2024 22:05:25 -0700 Message-Id: <20240621050525.3720069-16-allen.lkml@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240621050525.3720069-1-allen.lkml@gmail.com> References: <20240621050525.3720069-1-allen.lkml@gmail.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Migrate tasklet APIs to the new bottom half workqueue mechanism. It replaces all occurrences of tasklet usage with the appropriate workqueue APIs throughout the mtk-wed driver. This transition ensures compatibility with the latest design and enhances performance. Signed-off-by: Allen Pais --- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 12 ++++++------ drivers/net/ethernet/mediatek/mtk_wed_wo.h | 3 ++- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 7063c78bd35f..acca9ec67fcf 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -71,7 +71,7 @@ static void mtk_wed_wo_irq_enable(struct mtk_wed_wo *wo, u32 mask) { mtk_wed_wo_set_isr_mask(wo, 0, mask, false); - tasklet_schedule(&wo->mmio.irq_tasklet); + queue_work(system_bh_wq, &wo->mmio.irq_bh_work); } static void @@ -227,14 +227,14 @@ mtk_wed_wo_irq_handler(int irq, void *data) struct mtk_wed_wo *wo = data; mtk_wed_wo_set_isr(wo, 0); - tasklet_schedule(&wo->mmio.irq_tasklet); + queue_work(system_bh_wq, &wo->mmio.irq_bh_work); return IRQ_HANDLED; } -static void mtk_wed_wo_irq_tasklet(struct tasklet_struct *t) +static void mtk_wed_wo_irq_bh_work(struct work_struct *work) { - struct mtk_wed_wo *wo = from_tasklet(wo, t, mmio.irq_tasklet); + struct mtk_wed_wo *wo = from_work(wo, work, mmio.irq_bh_work); u32 intr, mask; /* disable interrupts */ @@ -395,7 +395,7 @@ mtk_wed_wo_hardware_init(struct mtk_wed_wo *wo) wo->mmio.irq = irq_of_parse_and_map(np, 0); wo->mmio.irq_mask = MTK_WED_WO_ALL_INT_MASK; spin_lock_init(&wo->mmio.lock); - tasklet_setup(&wo->mmio.irq_tasklet, mtk_wed_wo_irq_tasklet); + INIT_WORK(&wo->mmio.irq_bh_work, mtk_wed_wo_irq_bh_work); ret = devm_request_irq(wo->hw->dev, wo->mmio.irq, mtk_wed_wo_irq_handler, IRQF_TRIGGER_HIGH, @@ -449,7 +449,7 @@ mtk_wed_wo_hw_deinit(struct mtk_wed_wo *wo) /* disable interrupts */ mtk_wed_wo_set_isr(wo, 0); - tasklet_disable(&wo->mmio.irq_tasklet); + disable_work_sync(&wo->mmio.irq_bh_work); disable_irq(wo->mmio.irq); devm_free_irq(wo->hw->dev, wo->mmio.irq, wo); diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h index 
87a67fa3868d..50d619fa213a 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h @@ -6,6 +6,7 @@ #include #include +#include <linux/workqueue.h> struct mtk_wed_hw; @@ -247,7 +248,7 @@ struct mtk_wed_wo { struct regmap *regs; spinlock_t lock; - struct tasklet_struct irq_tasklet; + struct work_struct irq_bh_work; int irq; u32 irq_mask; } mmio;
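The interrupt-side change has the same shape in the drivers converted here: the hard IRQ handler now defers to a BH workqueue instead of scheduling a tasklet. A condensed sketch with invented baz_* names (not mtk_wed code):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct baz_dev {
	struct work_struct irq_bh_work;
};

static irqreturn_t baz_interrupt(int irq, void *data)
{
	struct baz_dev *dev = data;

	/* device-specific acking/masking of the interrupt source goes here */

	/* was: tasklet_schedule(&dev->irq_tasklet);
	 * latency-sensitive paths (ehea notification queue, jme rx-empty/rx-clean)
	 * use system_bh_highpri_wq, the replacement for tasklet_hi_schedule()
	 */
	queue_work(system_bh_wq, &dev->irq_bh_work);

	return IRQ_HANDLED;
}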