
[net-next,v3,05/15] net: cavium/liquidio: Convert tasklet API to new bottom half workqueue mechanism

Message ID 20240730183403.4176544-6-allen.lkml@gmail.com (mailing list archive)
State Not Applicable
Series ethernet: Convert from tasklet to BH workqueue

Commit Message

Allen July 30, 2024, 6:33 p.m. UTC
Migrate the tasklet API to the new bottom half workqueue mechanism,
replacing all occurrences of tasklet usage with the appropriate
workqueue APIs throughout the cavium/liquidio driver. This transition
ensures compatibility with the latest design and enhances performance.
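
The conversion maps the tasklet API onto its BH workqueue counterpart
(summarized from the diff below):

	tasklet_setup(t, fn)    ->  INIT_WORK(w, fn)
	tasklet_schedule(t)     ->  queue_work(system_bh_wq, w)
	tasklet_disable(t)      ->  disable_work_sync(w)
	tasklet_enable(t)       ->  enable_and_queue_work(system_bh_wq, w)
	tasklet_kill(t)         ->  cancel_work_sync(w)
	from_tasklet(p, t, m)   ->  from_work(p, w, m)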

Reviewed-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
---
 .../net/ethernet/cavium/liquidio/lio_core.c   |  4 ++--
 .../net/ethernet/cavium/liquidio/lio_main.c   | 24 +++++++++----------
 .../ethernet/cavium/liquidio/lio_vf_main.c    | 10 ++++----
 .../ethernet/cavium/liquidio/octeon_droq.c    |  4 ++--
 .../ethernet/cavium/liquidio/octeon_main.h    |  4 ++--
 5 files changed, 23 insertions(+), 23 deletions(-)

Comments

Jakub Kicinski Aug. 1, 2024, 2:08 a.m. UTC | #1
On Tue, 30 Jul 2024 11:33:53 -0700 Allen Pais wrote:
> -	tasklet_enable(&oct_priv->droq_tasklet);
> +	enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
>  
>  	if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
>  		unregister_netdev(netdev);

>  		if (OCTEON_CN23XX_PF(oct))
>  			oct->droq[0]->ops.poll_mode = 0;
>  
> -		tasklet_enable(&oct_priv->droq_tasklet);
> +		enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);

Could you shed some light, in the cover letter or this patch, on why
tasklet_enable() is converted to enable_and_queue_work()? On the
face of it those two do not appear to do the same thing.

I'll apply patches 1-4 already.
Allen Aug. 1, 2024, 10 p.m. UTC | #2
Jakub,

> > -     tasklet_enable(&oct_priv->droq_tasklet);
> > +     enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
> >
> >       if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
> >               unregister_netdev(netdev);
>
> >               if (OCTEON_CN23XX_PF(oct))
> >                       oct->droq[0]->ops.poll_mode = 0;
> >
> > -             tasklet_enable(&oct_priv->droq_tasklet);
> > +             enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
>
> Could you shed some light, in the cover letter or this patch, on why
> tasklet_enable() is converted to enable_and_queue_work()? On the
> face of it those two do not appear to do the same thing.

With the transition to workqueues, the implementation on the workqueue side is:

tasklet_enable() -> enable_work() + queue_work()

Ref: https://lore.kernel.org/all/20240227172852.2386358-7-tj@kernel.org/

enable_and_queue_work() is a helper which combines the two calls.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=474a549ff4c989427a14fdab851e562c8a63fe24
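
For reference, the helper from that commit is essentially the
following (a sketch, with the kernel-doc trimmed):

	static inline bool enable_and_queue_work(struct workqueue_struct *wq,
						 struct work_struct *work)
	{
		/* enable_work() returns true once the disable count hits 0 */
		if (enable_work(work)) {
			queue_work(wq, work);
			return true;
		}

		return false;
	}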

Hope this answers your question.

Thanks,
Allen

>
> I'll apply patches 1-4 already.
Jakub Kicinski Aug. 2, 2024, 12:57 a.m. UTC | #3
On Thu, 1 Aug 2024 15:00:23 -0700 Allen wrote:
> > Could you shed some light, in the cover letter or this patch, on why
> > tasklet_enable() is converted to enable_and_queue_work()? On the
> > face of it those two do not appear to do the same thing.
> 
> With the transition to workqueues, the implementation on the workqueue side is:
> 
> tasklet_enable() -> enable_work() + queue_work()
> 
> Ref: https://lore.kernel.org/all/20240227172852.2386358-7-tj@kernel.org/
> 
> enable_and_queue_work() is a helper which combines the two calls.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=474a549ff4c989427a14fdab851e562c8a63fe24
> 
> Hope this answers your question.

To an extent. tj says "unconditionally scheduling the work item after
enable_work() returns %true should work for most users." 
You need to include the explanation of the conversion not being 1:1 
in the commit message, and provide some analysis why it's fine for this
user.
Allen Aug. 5, 2024, 5:23 p.m. UTC | #4
> > > Could you shed some light, in the cover letter or this patch, on why
> > > tasklet_enable() is converted to enable_and_queue_work()? On the
> > > face of it those two do not appear to do the same thing.
> >
> > With the transition to workqueues, the implementation on the workqueue side is:
> >
> > tasklet_enable() -> enable_work() + queue_work()
> >
> > Ref: https://lore.kernel.org/all/20240227172852.2386358-7-tj@kernel.org/
> >
> > enable_and_queue_work() is a helper which combines the two calls.
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=474a549ff4c989427a14fdab851e562c8a63fe24
> >
> > Hope this answers your question.
>
> To an extent. tj says "unconditionally scheduling the work item after
> enable_work() returns %true should work for most users."
> You need to include the explanation of the conversion not being 1:1
> in the commit message, and provide some analysis why it's fine for this
> user.


Sure, please review the explanation below and let me
know if it is clear enough:

tasklet_enable() is used to enable a tasklet, which defers
work to be executed in an interrupt context. It relies on the
tasklet mechanism for deferred execution.

enable_and_queue_work() combines enabling the work with
scheduling it on a workqueue. This approach not only enables
the work but also schedules it for execution by the workqueue
system, which is more flexible and suitable for tasks needing
process context rather than interrupt context.

enable_and_queue_work() internally calls enable_work() to enable
the work item and then uses queue_work() to add it to the workqueue.
This ensures that the work item is both enabled and explicitly
scheduled for execution within the workqueue system's context.

As mentioned, "unconditionally scheduling the work item after
enable_work() returns true should work for most users." This
ensures that the work is consistently scheduled for execution,
aligning with the typical workqueue usage pattern. Most users
expect that enabling a work item implies it will be scheduled for
execution without additional conditional logic.
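
A minimal sketch of the semantic difference (t and w are hypothetical
instances, not names from the driver):

	struct tasklet_struct t;	/* set up with tasklet_setup() */
	struct work_struct w;		/* set up with INIT_WORK() */

	/* Tasklet: enabling only drops the disable count; the callback
	 * runs only if a tasklet_schedule() is pending.
	 */
	tasklet_disable(&t);
	tasklet_enable(&t);

	/* BH workqueue conversion: enable AND queue unconditionally;
	 * the callback may run even though no schedule was pending.
	 */
	disable_work_sync(&w);
	enable_and_queue_work(system_bh_wq, &w);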

Thanks,
- Allen
Jakub Kicinski Aug. 5, 2024, 7:39 p.m. UTC | #5
On Mon, 5 Aug 2024 10:23:41 -0700 Allen wrote:
> Sure, please review the explanation below and let me
> know if it is clear enough:
> 
> tasklet_enable() is used to enable a tasklet, which defers
> work to be executed in an interrupt context. It relies on the
> tasklet mechanism for deferred execution.
> 
> enable_and_queue_work() combines enabling the work with
> scheduling it on a workqueue. This approach not only enables
> the work but also schedules it for execution by the workqueue
> system, which is more flexible and suitable for tasks needing
> process context rather than interrupt context.
> 
> enable_and_queue_work() internally calls enable_work() to enable
> the work item and then uses queue_work() to add it to the workqueue.
> This ensures that the work item is both enabled and explicitly
> scheduled for execution within the workqueue system's context.
> 
> As mentioned, "unconditionally scheduling the work item after
> enable_work() returns true should work for most users." This
> ensures that the work is consistently scheduled for execution,
> aligning with the typical workqueue usage pattern. Most users
> expect that enabling a work item implies it will be scheduled for
> execution without additional conditional logic.

This looks good for the explanation of the APIs, but you need to 
add another paragraph explaining why the conversion is correct
for the given user. Basically whether the callback is safe to 
be called even if there's no work.
Allen Aug. 7, 2024, 3:15 a.m. UTC | #6
> > Sure, please review the explanation below and let me
> > know if it is clear enough:
> >
> > tasklet_enable() is used to enable a tasklet, which defers
> > work to be executed in an interrupt context. It relies on the
> > tasklet mechanism for deferred execution.
> >
> > enable_and_queue_work() combines enabling the work with
> > scheduling it on a workqueue. This approach not only enables
> > the work but also schedules it for execution by the workqueue
> > system, which is more flexible and suitable for tasks needing
> > process context rather than interrupt context.
> >
> > enable_and_queue_work() internally calls enable_work() to enable
> > the work item and then uses queue_work() to add it to the workqueue.
> > This ensures that the work item is both enabled and explicitly
> > scheduled for execution within the workqueue system's context.
> >
> > As mentioned, "unconditionally scheduling the work item after
> > enable_work() returns true should work for most users." This
> > ensures that the work is consistently scheduled for execution,
> > aligning with the typical workqueue usage pattern. Most users
> > expect that enabling a work item implies it will be scheduled for
> > execution without additional conditional logic.
>
> This looks good for the explanation of the APIs, but you need to
> add another paragraph explaining why the conversion is correct
> for the given user. Basically whether the callback is safe to
> be called even if there's no work.

 Okay.

How about the following:

In the context of the driver, the conversion from tasklet_enable()
to enable_and_queue_work() is correct because the callback function
associated with the work item is designed to be safe even if there
is no immediate work to process. The callback function can handle
being invoked in such situations without causing errors or undesirable
behavior. This makes the workqueue approach a suitable and safe
replacement for the current tasklet mechanism, as it provides the
necessary flexibility and ensures that the work item is properly
scheduled and executed.

Thanks,
Allen
Jakub Kicinski Aug. 7, 2024, 2:37 p.m. UTC | #7
On Tue, 6 Aug 2024 20:15:50 -0700 Allen wrote:
> In the context of the driver, the conversion from tasklet_enable()
> to enable_and_queue_work() is correct because the callback function
> associated with the work item is designed to be safe even if there
> is no immediate work to process. The callback function can handle
> being invoked in such situations without causing errors or undesirable
> behavior. This makes the workqueue approach a suitable and safe
> replacement for the current tasklet mechanism, as it provides the
> necessary flexibility and ensures that the work item is properly
> scheduled and executed.

Fewer words, clearer indication that you read the code would be better
for the reviewer. Like actually call out what in the code makes it safe.

Just to be clear -- conversions to enable_and_queue_work() will require
manual inspection in every case.
Allen Aug. 9, 2024, 2:31 a.m. UTC | #8
> > In the context of the driver, the conversion from tasklet_enable()
> > to enable_and_queue_work() is correct because the callback function
> > associated with the work item is designed to be safe even if there
> > is no immediate work to process. The callback function can handle
> > being invoked in such situations without causing errors or undesirable
> > behavior. This makes the workqueue approach a suitable and safe
> > replacement for the current tasklet mechanism, as it provides the
> > necessary flexibility and ensures that the work item is properly
> > scheduled and executed.
>
> Fewer words, clearer indication that you read the code would be better
> for the reviewer. Like actually call out what in the code makes it safe.
>
Okay.
> Just to be clear -- conversions to enable_and_queue_work() will require
> manual inspection in every case.

Attempting again.

enable_and_queue_work() only schedules work if it is not already
enabled, similar to how tasklet_enable() would only allow a tasklet to run
if it had been previously scheduled.

In the current driver, where we are attempting conversion, enable_work()
checks whether the work is already enabled and only enables it if it
was disabled. If no new work is queued, queue_work() won't be called.
Hence, the callback is safe even if there's no work.

Thanks,
- Allen
Jakub Kicinski Aug. 10, 2024, 3:36 a.m. UTC | #9
On Thu, 8 Aug 2024 19:31:57 -0700 Allen wrote:
> > > In the context of the driver, the conversion from tasklet_enable()
> > > to enable_and_queue_work() is correct because the callback function
> > > associated with the work item is designed to be safe even if there
> > > is no immediate work to process. The callback function can handle
> > > being invoked in such situations without causing errors or undesirable
> > > behavior. This makes the workqueue approach a suitable and safe
> > > replacement for the current tasklet mechanism, as it provides the
> > > necessary flexibility and ensures that the work item is properly
> > > scheduled and executed.  
> >
> > Fewer words, clearer indication that you read the code would be better
> > for the reviewer. Like actually call out what in the code makes it safe.
> >  
> Okay.
> > Just to be clear -- conversions to enable_and_queue_work() will require
> > manual inspection in every case.  
> 
> Attempting again.
> 
> enable_and_queue_work() only schedules work if it is not already
> enabled, similar to how tasklet_enable() would only allow a tasklet to run
> if it had been previously scheduled.
> 
> In the current driver, where we are attempting conversion, enable_work()
> checks whether the work is already enabled and only enables it if it
> was disabled. If no new work is queued, queue_work() won't be called.
> Hence, the callback is safe even if there's no work.

Hm. Let me give you an example of what I was hoping to see for this
patch (in addition to your explanation of the API difference):

 The conversion for oct_priv->droq_bh_work should be safe. While
 the work is per adapter, the callback (octeon_droq_bh()) walks all
 queues, and for each queue checks whether the oct->io_qmask.oq mask
 has a bit set. In case of spurious scheduling of the work - none of
 the bits should be set, making the callback a noop.
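
For reference, the guard in question, abridged from octeon_droq_bh()
(the full loop is visible in the lio_main.c hunk below):

	for (q_no = 0; q_no < MAX_OCTEON_OUTPUT_QUEUES(oct); q_no++) {
		if (!(oct->io_qmask.oq & BIT_ULL(q_no)))
			continue;	/* spurious run: bit clear, noop */
		reschedule |= octeon_droq_process_packets(oct,
							  oct->droq[q_no],
							  MAX_PACKET_BUDGET);
	}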
Allen Aug. 15, 2024, 4:45 p.m. UTC | #10
> > > > In the context of the driver, the conversion from tasklet_enable()
> > > > to enable_and_queue_work() is correct because the callback function
> > > > associated with the work item is designed to be safe even if there
> > > > is no immediate work to process. The callback function can handle
> > > > being invoked in such situations without causing errors or undesirable
> > > > behavior. This makes the workqueue approach a suitable and safe
> > > > replacement for the current tasklet mechanism, as it provides the
> > > > necessary flexibility and ensures that the work item is properly
> > > > scheduled and executed.
> > >
> > > Fewer words, clearer indication that you read the code would be better
> > > for the reviewer. Like actually call out what in the code makes it safe.
> > >
> > Okay.
> > > Just to be clear -- conversions to enable_and_queue_work() will require
> > > manual inspection in every case.
> >
> > Attempting again.
> >
> > enable_and_queue_work() only schedules work if it is not already
> > enabled, similar to how tasklet_enable() would only allow a tasklet to run
> > if it had been previously scheduled.
> >
> > In the current driver, where we are attempting conversion, enable_work()
> > checks whether the work is already enabled and only enables it if it
> > was disabled. If no new work is queued, queue_work() won't be called.
> > Hence, the callback is safe even if there's no work.
>
> Hm. Let me give you an example of what I was hoping to see for this
> patch (in addition to your explanation of the API difference):
>
>  The conversion for oct_priv->droq_bh_work should be safe. While
>  the work is per adapter, the callback (octeon_droq_bh()) walks all
>  queues, and for each queue checks whether the oct->io_qmask.oq mask
>  has a bit set. In case of spurious scheduling of the work - none of
>  the bits should be set, making the callback a noop.

Thank you. Really appreciate your patience and help.

Would you prefer a new series, or that I update this patch with
this additional info and resend it?

Thanks.
Jakub Kicinski Aug. 15, 2024, 11:49 p.m. UTC | #11
On Thu, 15 Aug 2024 09:45:43 -0700 Allen wrote:
> > Hm. Let me give you an example of what I was hoping to see for this
> > patch (in addition to your explanation of the API difference):
> >
> >  The conversion for oct_priv->droq_bh_work should be safe. While
> >  the work is per adapter, the callback (octeon_droq_bh()) walks all
> >  queues, and for each queue checks whether the oct->io_qmask.oq mask
> >  has a bit set. In case of spurious scheduling of the work - none of
> >  the bits should be set, making the callback a noop.  
> 
> Thank you. Really appreciate your patience and help.
> 
> Would you prefer a new series, or that I update this patch with
> this additional info and resend it?

Just thinking from the perspective of getting the conversions merged,
could you send out the drivers/net/ethernet conversions which don't
include use of enable_and_queue_work() first? Those should be quick
to review and merge. And then separately, if you could send the rest
with updated commit messages for each case, that would be splendid.
Allen Aug. 17, 2024, 4:27 p.m. UTC | #12
> > > Hm. Let me give you an example of what I was hoping to see for this
> > > patch (in addition to your explanation of the API difference):
> > >
> > >  The conversion for oct_priv->droq_bh_work should be safe. While
> > >  the work is per adapter, the callback (octeon_droq_bh()) walks all
> > >  queues, and for each queue checks whether the oct->io_qmask.oq mask
> > >  has a bit set. In case of spurious scheduling of the work - none of
> > >  the bits should be set, making the callback a noop.
> >
> > Thank you. Really appreciate your patience and help.
> >
> > Would you prefer a new series, or that I update this patch with
> > this additional info and resend it?
>
> Just thinking from the perspective of getting the conversions merged,
> could you send out the drivers/net/ethernet conversions which don't
> include use of enable_and_queue_work() first? Those should be quick
> to review and merge. And then separately, if you could send the rest
> with updated commit messages for each case, that would be splendid.

 Sure, let me send the series again with this driver dropped. We can
revisit drivers with enable_and_queue_work() later.

Thank you.

Patch

diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
index 674c54831875..37307e02a6ff 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
@@ -925,7 +925,7 @@  int liquidio_schedule_msix_droq_pkt_handler(struct octeon_droq *droq, u64 ret)
 			if (OCTEON_CN23XX_VF(oct))
 				dev_err(&oct->pci_dev->dev,
 					"should not come here should not get rx when poll mode = 0 for vf\n");
-			tasklet_schedule(&oct_priv->droq_tasklet);
+			queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 			return 1;
 		}
 		/* this will be flushed periodically by check iq db */
@@ -975,7 +975,7 @@  static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct)
 				droq->ops.napi_fn(droq);
 				oct_priv->napi_mask |= BIT_ULL(oq_no);
 			} else {
-				tasklet_schedule(&oct_priv->droq_tasklet);
+				queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 			}
 		}
 	}
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 1d79f6eaa41f..d348656c2f38 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -150,12 +150,12 @@  static int liquidio_set_vf_link_state(struct net_device *netdev, int vfidx,
 static struct handshake handshake[MAX_OCTEON_DEVICES];
 static struct completion first_stage;
 
-static void octeon_droq_bh(struct tasklet_struct *t)
+static void octeon_droq_bh(struct work_struct *work)
 {
 	int q_no;
 	int reschedule = 0;
-	struct octeon_device_priv *oct_priv = from_tasklet(oct_priv, t,
-							  droq_tasklet);
+	struct octeon_device_priv *oct_priv = from_work(oct_priv, work,
+							  droq_bh_work);
 	struct octeon_device *oct = oct_priv->dev;
 
 	for (q_no = 0; q_no < MAX_OCTEON_OUTPUT_QUEUES(oct); q_no++) {
@@ -180,7 +180,7 @@  static void octeon_droq_bh(struct tasklet_struct *t)
 	}
 
 	if (reschedule)
-		tasklet_schedule(&oct_priv->droq_tasklet);
+		queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 }
 
 static int lio_wait_for_oq_pkts(struct octeon_device *oct)
@@ -199,7 +199,7 @@  static int lio_wait_for_oq_pkts(struct octeon_device *oct)
 		}
 		if (pkt_cnt > 0) {
 			pending_pkts += pkt_cnt;
-			tasklet_schedule(&oct_priv->droq_tasklet);
+			queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 		}
 		pkt_cnt = 0;
 		schedule_timeout_uninterruptible(1);
@@ -1130,7 +1130,7 @@  static void octeon_destroy_resources(struct octeon_device *oct)
 		break;
 	}                       /* end switch (oct->status) */
 
-	tasklet_kill(&oct_priv->droq_tasklet);
+	cancel_work_sync(&oct_priv->droq_bh_work);
 }
 
 /**
@@ -1234,7 +1234,7 @@  static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx)
 	list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
 		netif_napi_del(napi);
 
-	tasklet_enable(&oct_priv->droq_tasklet);
+	enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 
 	if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
 		unregister_netdev(netdev);
@@ -1770,7 +1770,7 @@  static int liquidio_open(struct net_device *netdev)
 	int ret = 0;
 
 	if (oct->props[lio->ifidx].napi_enabled == 0) {
-		tasklet_disable(&oct_priv->droq_tasklet);
+		disable_work_sync(&oct_priv->droq_bh_work);
 
 		list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
 			napi_enable(napi);
@@ -1896,7 +1896,7 @@  static int liquidio_stop(struct net_device *netdev)
 		if (OCTEON_CN23XX_PF(oct))
 			oct->droq[0]->ops.poll_mode = 0;
 
-		tasklet_enable(&oct_priv->droq_tasklet);
+		enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 	}
 
 	dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
@@ -4204,9 +4204,9 @@  static int octeon_device_init(struct octeon_device *octeon_dev)
 		}
 	}
 
-	/* Initialize the tasklet that handles output queue packet processing.*/
-	dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq tasklet\n");
-	tasklet_setup(&oct_priv->droq_tasklet, octeon_droq_bh);
+	/* Initialize the bh work that handles output queue packet processing.*/
+	dev_dbg(&octeon_dev->pci_dev->dev, "Initializing droq bh work\n");
+	INIT_WORK(&oct_priv->droq_bh_work, octeon_droq_bh);
 
 	/* Setup the interrupt handler and record the INT SUM register address
 	 */
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
index 62c2eadc33e3..04117625f388 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -87,7 +87,7 @@  static int lio_wait_for_oq_pkts(struct octeon_device *oct)
 		}
 		if (pkt_cnt > 0) {
 			pending_pkts += pkt_cnt;
-			tasklet_schedule(&oct_priv->droq_tasklet);
+			queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 		}
 		pkt_cnt = 0;
 		schedule_timeout_uninterruptible(1);
@@ -584,7 +584,7 @@  static void octeon_destroy_resources(struct octeon_device *oct)
 		break;
 	}
 
-	tasklet_kill(&oct_priv->droq_tasklet);
+	cancel_work_sync(&oct_priv->droq_bh_work);
 }
 
 /**
@@ -687,7 +687,7 @@  static void liquidio_destroy_nic_device(struct octeon_device *oct, int ifidx)
 	list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
 		netif_napi_del(napi);
 
-	tasklet_enable(&oct_priv->droq_tasklet);
+	enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 
 	if (atomic_read(&lio->ifstate) & LIO_IFSTATE_REGISTERED)
 		unregister_netdev(netdev);
@@ -911,7 +911,7 @@  static int liquidio_open(struct net_device *netdev)
 	int ret = 0;
 
 	if (!oct->props[lio->ifidx].napi_enabled) {
-		tasklet_disable(&oct_priv->droq_tasklet);
+		disable_work_sync(&oct_priv->droq_bh_work);
 
 		list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
 			napi_enable(napi);
@@ -986,7 +986,7 @@  static int liquidio_stop(struct net_device *netdev)
 
 		oct->droq[0]->ops.poll_mode = 0;
 
-		tasklet_enable(&oct_priv->droq_tasklet);
+		enable_and_queue_work(system_bh_wq, &oct_priv->droq_bh_work);
 	}
 
 	cancel_delayed_work_sync(&lio->stats_wk.work);
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
index eef12fdd246d..4e5f8bbc891b 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
@@ -96,7 +96,7 @@  u32 octeon_droq_check_hw_for_pkts(struct octeon_droq *droq)
 	last_count = pkt_count - droq->pkt_count;
 	droq->pkt_count = pkt_count;
 
-	/* we shall write to cnts  at napi irq enable or end of droq tasklet */
+	/* we shall write to cnts  at napi irq enable or end of droq bh_work */
 	if (last_count)
 		atomic_add(last_count, &droq->pkts_pending);
 
@@ -764,7 +764,7 @@  octeon_droq_process_packets(struct octeon_device *oct,
 				(u16)rdisp->rinfo->recv_pkt->rh.r.subcode));
 	}
 
-	/* If there are packets pending. schedule tasklet again */
+	/* If there are packets pending. schedule bh_work again */
 	if (atomic_read(&droq->pkts_pending))
 		return 1;
 
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_main.h b/drivers/net/ethernet/cavium/liquidio/octeon_main.h
index 5b4cb725f60f..a8f2a0a7b08e 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_main.h
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_main.h
@@ -24,6 +24,7 @@ 
 #define  _OCTEON_MAIN_H_
 
 #include <linux/sched/signal.h>
+#include <linux/workqueue.h>
 
 #if BITS_PER_LONG == 32
 #define CVM_CAST64(v) ((long long)(v))
@@ -36,8 +37,7 @@ 
 #define DRV_NAME "LiquidIO"
 
 struct octeon_device_priv {
-	/** Tasklet structures for this device. */
-	struct tasklet_struct droq_tasklet;
+	struct work_struct droq_bh_work;
 	unsigned long napi_mask;
 	struct octeon_device *dev;
 };