[net-next,v3,0/6,pull,request] idpf: XDP chapter II: convert Tx completion to libeth

Message ID 20240909205323.3110312-1-anthony.l.nguyen@intel.com

Message

Tony Nguyen Sept. 9, 2024, 8:53 p.m. UTC
Alexander Lobakin says:

XDP for idpf is currently 5 chapters:
* convert Rx to libeth;
* convert Tx completion to libeth (this);
* generic XDP and XSk code changes;
* actual XDP for idpf via libeth_xdp;
* XSk for idpf (^).

Part II does the following:
* adds generic libeth Tx completion routines;
* converts idpf to use generic libeth Tx comp routines;
* fixes Tx queue timeouts and robustifies Tx completion in general;
* fixes Tx event/descriptor flushes (writebacks).

Most idpf patches again remove more lines than they add.
Generic Tx completion helpers and structs are needed as libeth_xdp
(Ch. III) makes use of them. WB_ON_ITR is needed since XDPSQs don't
work without it at all. The Tx queue timeout fixes are needed because,
without them, it's much easier to hit a Tx timeout once WB_ON_ITR is
enabled.
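
As an aside, the idea behind the generic completion helpers can be
sketched as follows. The names below are made up purely for
illustration and are not the actual API added in
include/net/libeth/tx.h; the point is only that every driver's Tx
completion boils down to the same steps: check what the buffer slot
holds, unmap the DMA, free the skb (or just unmap for fragments), and
account packets/bytes for BQL:

  #include <linux/dma-mapping.h>
  #include <linux/skbuff.h>

  /* Purely illustrative; type, field and function names are hypothetical. */
  struct example_tx_buf {
          enum {
                  EXAMPLE_TX_EMPTY,  /* unused slot, nothing to do */
                  EXAMPLE_TX_SKB,    /* last buffer of an skb: unmap + free */
                  EXAMPLE_TX_FRAG,   /* middle fragment: unmap only */
          } type;
          struct sk_buff *skb;
          DEFINE_DMA_UNMAP_ADDR(dma);
          DEFINE_DMA_UNMAP_LEN(len);
  };

  struct example_sq_stats {
          u32 packets;
          u64 bytes;
  };

  static void example_tx_complete(struct example_tx_buf *buf,
                                  struct device *dev,
                                  struct example_sq_stats *ss)
  {
          switch (buf->type) {
          case EXAMPLE_TX_SKB:
                  dma_unmap_single(dev, dma_unmap_addr(buf, dma),
                                   dma_unmap_len(buf, len), DMA_TO_DEVICE);
                  ss->packets++;
                  ss->bytes += buf->skb->len;
                  napi_consume_skb(buf->skb, 1);
                  break;
          case EXAMPLE_TX_FRAG:
                  dma_unmap_page(dev, dma_unmap_addr(buf, dma),
                                 dma_unmap_len(buf, len), DMA_TO_DEVICE);
                  break;
          case EXAMPLE_TX_EMPTY:
          default:
                  break;
          }

          /* Mark the slot as reusable for the next transmission. */
          buf->type = EXAMPLE_TX_EMPTY;
  }

A driver's completion poll would call such a helper once per reaped
descriptor and then feed the accumulated packets/bytes into
netdev_tx_completed_queue() to keep BQL in sync.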
---
v3:
- drop the stats implementation. It's not generic, uses old Ethtool
  interfaces and is written using macro templates which made it barely
  readable (Kuba).
- replace `/* <multi-line comment>` with `/*\n * <multi-line comment>`
  since the special rule for netdev was removed.

v2: https://lore.kernel.org/netdev/20240819223442.48013-1-anthony.l.nguyen@intel.com
- Rebased

v1: https://lore.kernel.org/netdev/20240814173309.4166149-1-anthony.l.nguyen@intel.com/

iwl: https://lore.kernel.org/intel-wired-lan/20240904154748.2114199-1-aleksander.lobakin@intel.com/

This series contains updates to the idpf driver, libeth, and the
netdevice core header.

The following are changes since commit bfba7bc8b7c2c100b76edb3a646fdce256392129:
  Merge branch 'unmask-dscp-part-four'
and are available in the git repository at:
  git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue 200GbE

Alexander Lobakin (3):
  libeth: add Tx buffer completion helpers
  idpf: convert to libeth Tx buffer completion
  netdevice: add netdev_tx_reset_subqueue() shorthand

Joshua Hay (2):
  idpf: refactor Tx completion routines
  idpf: enable WB_ON_ITR

Michal Kubiak (1):
  idpf: fix netdev Tx queue stop/wake

 drivers/net/ethernet/intel/idpf/idpf_dev.c    |   2 +
 .../ethernet/intel/idpf/idpf_singleq_txrx.c   | 110 +++--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 395 ++++++++----------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  92 ++--
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |   2 +
 include/linux/netdevice.h                     |  13 +-
 include/net/libeth/tx.h                       | 129 ++++++
 include/net/libeth/types.h                    |  25 ++
 8 files changed, 442 insertions(+), 326 deletions(-)
 create mode 100644 include/net/libeth/tx.h
 create mode 100644 include/net/libeth/types.h
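
The netdev_tx_reset_subqueue() shorthand from the series is the
per-subqueue counterpart of netdev_tx_reset_queue(). Conceptually it
is something along these lines (simplified sketch only; see the actual
change to include/linux/netdevice.h for the real definition):

  /* Relies on <linux/netdevice.h>; sketch of the shorthand's shape. */
  static inline void netdev_tx_reset_subqueue(struct net_device *dev,
                                              unsigned int qid)
  {
          netdev_tx_reset_queue(netdev_get_tx_queue(dev, qid));
  }

Resetting the BQL/DQL state of a single Tx subqueue is handy when a
driver such as idpf tears down and reconfigures individual queues
without touching the rest of the device.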

Comments

Jakub Kicinski Sept. 10, 2024, 2:16 p.m. UTC | #1
On Mon,  9 Sep 2024 13:53:15 -0700 Tony Nguyen wrote:
> Alexander Lobakin says:
> 
> XDP for idpf is currently 5 chapters:
> * convert Rx to libeth;
> * convert Tx completion to libeth (this);
> * generic XDP and XSk code changes;
> * actual XDP for idpf via libeth_xdp;
> * XSk for idpf (^).
> 
> Part II does the following:
> * adds generic libeth Tx completion routines;
> * converts idpf to use generic libeth Tx comp routines;
> * fixes Tx queue timeouts and robustifies Tx completion in general;
> * fixes Tx event/descriptor flushes (writebacks).

You're posting two series at once, again. I was going to merge the
subfunction series yesterday, but since you don't wait why would 
I bother trying to merge your code quickly. And this morning I got
chased by Thorsten about Intel regressions, again:
 https://bugzilla.kernel.org/show_bug.cgi?id=219143

Do you have anything else queued up?
I'm really tempted to ask you to not post anything else for net-next
this week.
Tony Nguyen Sept. 10, 2024, 4:46 p.m. UTC | #2
On 9/10/2024 7:16 AM, Jakub Kicinski wrote:
> On Mon,  9 Sep 2024 13:53:15 -0700 Tony Nguyen wrote:
>> Alexander Lobakin says:
>>
>> XDP for idpf is currently 5 chapters:
>> * convert Rx to libeth;
>> * convert Tx completion to libeth (this);
>> * generic XDP and XSk code changes;
>> * actual XDP for idpf via libeth_xdp;
>> * XSk for idpf (^).
>>
>> Part II does the following:
>> * adds generic libeth Tx completion routines;
>> * converts idpf to use generic libeth Tx comp routines;
>> * fixes Tx queue timeouts and robustifies Tx completion in general;
>> * fixes Tx event/descriptor flushes (writebacks).
> 
> You're posting two series at once, again. I was going to merge the
> subfunction series yesterday, but since you don't wait why would
> I bother trying to merge your code quickly.

I thought last month's vacations were over as I had seen Eric and Paolo 
on the list and that things were returning to normal.

> And this morning I got
> chased by Thorsten about Intel regressions, again:
>   https://bugzilla.kernel.org/show_bug.cgi?id=219143

Our client team, who works on that driver, was working on that issue. I 
will check in with them.

> Do you have anything else queued up?
> I'm really tempted to ask you to not post anything else for net-next
> this week.

I do have more patches that need to be sent, but it's more than can fit 
in the time that's left. There are 1 or 2 more that I was hoping to get 
in before net-next closed or Plumbers starts.

Thanks,
Tony
Jakub Kicinski Sept. 10, 2024, 9:44 p.m. UTC | #3
On Tue, 10 Sep 2024 09:46:57 -0700 Tony Nguyen wrote:
> > You're posting two series at once, again. I was going to merge the
> > subfunction series yesterday, but since you don't wait why would
> > I bother trying to merge your code quickly.  
> 
> I thought last month's vacations were over as I had seen Eric and Paolo 
> on the list and that things were returning to normal.

Stubbornly people continue to take vacations, have babies etc.
But that's besides the point.

Either we are merging stuff quickly, and there's no need to queue two
series, or we're backed up due to absences and you should wait.

The rule of 15 patches at a time is about breaking work up as much as
throttling.  Up to 15 outstanding patches to each tree.
I find it hard to believe you don't know this.

> > And this morning I got
> > chased by Thorsten about Intel regressions, again:
> >   https://bugzilla.kernel.org/show_bug.cgi?id=219143  
> 
> Our client team, who works on that driver, was working on that issue.
> I will check in with them.
> 
> > Do you have anything else queued up?
> > I'm really tempted to ask you to not post anything else for net-next
> > this week.  
> 
> I do have more patches that need to be sent, but it's more than can fit 
> in the time that's left. There are 1 or 2 more that I was hoping to get 
> in before net-next closed or Plumbers starts.

Higher prio stuff (read: exclusively authored by people who were
actively reviewing upstream (non-Intel) code within last 3 months) 
may be able to get applied in time. We have 250 outstanding patches
right now, and just 3 days to go.
Tony Nguyen Sept. 10, 2024, 11:05 p.m. UTC | #4
On 9/10/2024 2:44 PM, Jakub Kicinski wrote:
> On Tue, 10 Sep 2024 09:46:57 -0700 Tony Nguyen wrote:
>>> You're posting two series at once, again. I was going to merge the
>>> subfunction series yesterday, but since you don't wait why would
>>> I bother trying to merge your code quickly.
>>
>> I thought last month's vacations were over as I had seen Eric and Paolo
>> on the list and that things were returning to normal.
> 
> Stubbornly people continue to take vacations, have babies etc.
> But that's besides the point.
> 
> Either we are merging stuff quickly, and there's no need to queue two
> series, or we're backed up due to absences and you should wait.
> 
> The rule of 15 patches at a time is about breaking work up as much as
> throttling.  Up to 15 outstanding patches to each tree.
> I find it hard to believe you don't know this.

Honestly I didn't, but will follow this now that I do.

>>> And this morning I got
>>> chased by Thorsten about Intel regressions, again:
>>>    https://bugzilla.kernel.org/show_bug.cgi?id=219143
>>
>> Our client team, who works on that driver, was working on that issue.
>> I will check in with them.
>>
>>> Do you have anything else queued up?
>>> I'm really tempted to ask you to not post anything else for net-next
>>> this week.
>>
>> I do have more patches that need to be sent, but it's more than can fit
>> in the time that's left. There are 1 or 2 more that I was hoping to get
>> in before net-next closed or Plumbers starts.
> 
> Higher prio stuff (read: exclusively authored by people who were
> actively reviewing upstream (non-Intel) code within last 3 months)
> may be able to get applied in time. We have 250 outstanding patches
> right now, and just 3 days to go.

I'll hold off on sending those then and try to get us more involved in 
the future.

Thanks,
Tony
patchwork-bot+netdevbpf@kernel.org Sept. 12, 2024, 3:50 a.m. UTC | #5
Hello:

This series was applied to netdev/net-next.git (main)
by Tony Nguyen <anthony.l.nguyen@intel.com>:

On Mon,  9 Sep 2024 13:53:15 -0700 you wrote:
> Alexander Lobakin says:
> 
> XDP for idpf is currently 5 chapters:
> * convert Rx to libeth;
> * convert Tx completion to libeth (this);
> * generic XDP and XSk code changes;
> * actual XDP for idpf via libeth_xdp;
> * XSk for idpf (^).
> 
> [...]

Here is the summary with links:
  - [net-next,v3,1/6] libeth: add Tx buffer completion helpers
    https://git.kernel.org/netdev/net-next/c/080d72f471c8
  - [net-next,v3,2/6] idpf: convert to libeth Tx buffer completion
    https://git.kernel.org/netdev/net-next/c/d9028db618a6
  - [net-next,v3,3/6] netdevice: add netdev_tx_reset_subqueue() shorthand
    https://git.kernel.org/netdev/net-next/c/3dc95a3edd0a
  - [net-next,v3,4/6] idpf: refactor Tx completion routines
    https://git.kernel.org/netdev/net-next/c/24eb35b15152
  - [net-next,v3,5/6] idpf: fix netdev Tx queue stop/wake
    https://git.kernel.org/netdev/net-next/c/e4b398dd82f5
  - [net-next,v3,6/6] idpf: enable WB_ON_ITR
    https://git.kernel.org/netdev/net-next/c/9c4a27da0ecc

You are awesome, thank you!