| Message ID | 20220319004738.1068685-1-Jason@zx2c4.com |
|---|---|
| State | Superseded |
| Delegated to | Netdev Maintainers |
| Series | net: remove lockdep asserts from ____napi_schedule() |
Hi Jakub,

Er, I forgot to mark this as net-next, but as it's connected to the discussion we were just having, I think you get the idea. :)

Jason
On Fri, 18 Mar 2022 18:50:08 -0600 Jason A. Donenfeld wrote:
> Hi Jakub,
>
> Er, I forgot to mark this as net-next, but as it's connected to the
> discussion we were just having, I think you get the idea. :)

Yup, patchwork bot figured it out, too. All good :)
On 2022-03-18 18:47:38 [-0600], Jason A. Donenfeld wrote:
> This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
> ____napi_schedule()."). While good in theory, in practice it causes
> issues with various drivers, and so it can be revisited earlier in the
> cycle where those drivers can be adjusted if needed.

Do you plan to address the wireguard warning?

> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4277,9 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
>  {
>  	struct task_struct *thread;
>
> -	lockdep_assert_softirq_will_run();
> -	lockdep_assert_irqs_disabled();

Could you please keep the lockdep_assert_irqs_disabled()? That one is
needed regardless of the upper one.

Sebastian
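For context, the more targeted revert Sebastian is asking for would presumably look something like the following sketch. This is hypothetical, not a patch posted in this thread: it drops only the softirq-will-run assertion and keeps lockdep_assert_irqs_disabled().

```diff
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4277,9 +4277,8 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
 	lockdep_assert_irqs_disabled();
 
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
```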
Hi Sebastian,

On Sat, Mar 19, 2022 at 6:01 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> On 2022-03-18 18:47:38 [-0600], Jason A. Donenfeld wrote:
> > This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
> > ____napi_schedule()."). While good in theory, in practice it causes
> > issues with various drivers, and so it can be revisited earlier in the
> > cycle where those drivers can be adjusted if needed.
>
> Do you plan to address the wireguard warning?

It seemed to me like you had a lot of interesting ideas regarding packet batching and performance, and around when bh is enabled or not. I'm waiting for a patch from you on this, as I mentioned in my previous email. There is definitely a lot of interesting potential performance work here. I am curious to play around with it too, of course, but it sounded to me like you had very specific ideas. I'm not really sure how to determine how many packets to batch, except through empirical observation or some kind of dql scheme. Or maybe there's some optimal quantity due to the way napi works in the first place. Anyway, there's some research to do here.

> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4277,9 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> >  {
> >  	struct task_struct *thread;
> >
> > -	lockdep_assert_softirq_will_run();
> > -	lockdep_assert_irqs_disabled();
>
> Could you please keep the lockdep_assert_irqs_disabled()? That one is
> needed regardless of the upper one.

Feel free to send in a more specific revert if you think it's justifiable. I just sent in the thing that reverted the patch that caused the regression - the dumb brute approach.

Jason
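The "dql scheme" mentioned above presumably refers to the kernel's dynamic queue limits library (include/linux/dynamic_queue_limits.h, the mechanism behind BQL), which adapts a queue limit from completion feedback. A minimal, hypothetical sketch of how that API could drive an adaptive batch size follows; the wg_batch_* wrappers are invented for illustration and nothing like this was proposed in the thread.

```c
#include <linux/dynamic_queue_limits.h>

/* Hypothetical illustration only: size a packet batch adaptively with DQL.
 * The wg_batch_* names are invented; this is not code from the thread.
 */
struct wg_batch_state {
	struct dql dql;
};

static void wg_batch_init(struct wg_batch_state *bs)
{
	dql_init(&bs->dql, HZ);		/* re-evaluate slack about once a second */
}

/* How many more packets the current batch may take (<= 0 means: flush now). */
static int wg_batch_room(struct wg_batch_state *bs)
{
	return dql_avail(&bs->dql);
}

static void wg_batch_enqueued(struct wg_batch_state *bs, unsigned int pkts)
{
	dql_queued(&bs->dql, pkts);	/* account packets entering the batch */
}

static void wg_batch_completed(struct wg_batch_state *bs, unsigned int pkts)
{
	dql_completed(&bs->dql, pkts);	/* completion feedback; DQL adapts the limit */
}
```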
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0cc65d216701..467b94257105 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -329,12 +329,6 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_assert_none_held_once()	\
 	lockdep_assert_once(!current->lockdep_depth)
 
-/*
- * Ensure that softirq is handled within the callchain and not delayed and
- * handled by chance.
- */
-#define lockdep_assert_softirq_will_run()	\
-	lockdep_assert_once(hardirq_count() | softirq_count())
-
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
@@ -420,7 +414,6 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)	do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
-#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)	(0)
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8e0cc5f2020d..6cad39b73a8e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4277,9 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
-	lockdep_assert_irqs_disabled();
-
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
 		/* Paired with smp_mb__before_atomic() in
 		 * napi_enable()/dev_set_threaded().
@@ -4887,7 +4884,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_softirq_will_run();
+	lockdep_assert_once(hardirq_count() | softirq_count());
 
 	trace_netif_rx_entry(skb);
 	ret = netif_rx_internal(skb);
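A note on the one assertion this patch keeps, in __netif_rx(): lockdep_assert_once(hardirq_count() | softirq_count()) warns at most once if the caller is in neither hardirq nor softirq (or bh-disabled) context. Roughly, per the v5.17-era headers, this is a paraphrased sketch (non-PREEMPT_RT), not verbatim kernel source:

```c
/* Paraphrased sketch of the v5.17-era definitions (non-PREEMPT_RT). */

/* include/linux/preempt.h: nesting counts are bitfields in preempt_count(). */
#define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
#define softirq_count()	(preempt_count() & SOFTIRQ_MASK) /* also raised by local_bh_disable() */

/* include/linux/lockdep.h: warn at most once, and only while lockdep is active. */
#define lockdep_assert_once(cond) \
	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
```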
This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
____napi_schedule()."). While good in theory, in practice it causes
issues with various drivers, and so it can be revisited earlier in the
cycle where those drivers can be adjusted if needed.

Link: https://lore.kernel.org/netdev/20220317192145.g23wprums5iunx6c@sx1/
Link: https://lore.kernel.org/netdev/CAHmME9oHFzL6CYVh8nLGkNKOkMeWi2gmxs_f7S8PATWwc6uQsw@mail.gmail.com/
Link: https://lore.kernel.org/wireguard/0000000000000eaff805da869d5b@google.com/
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Saeed Mahameed <saeed@kernel.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 include/linux/lockdep.h | 7 -------
 net/core/dev.c          | 5 +----
 2 files changed, 1 insertion(+), 11 deletions(-)