Message ID | 20241005145717.302575-3-jdamato@fastly.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | tg3: Link IRQs, NAPIs, and queues |
On Sat, Oct 5, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Link queues to NAPIs using the netdev-genl API so this information is
> queryable.
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
>         --dump queue-get --json='{"ifindex": 2}'
>
> [{'id': 0, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'}]
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>

> static void tg3_napi_enable(struct tg3 *tp)
> {
> +	int txq_idx = 0, rxq_idx = 0;
> +	struct tg3_napi *tnapi;
> 	int i;
>
> -	for (i = 0; i < tp->irq_cnt; i++)
> -		napi_enable(&tp->napi[i].napi);
> +	for (i = 0; i < tp->irq_cnt; i++) {
> +		tnapi = &tp->napi[i];
> +		napi_enable(&tnapi->napi);
> +		if (tnapi->tx_buffers) {
> +			netif_queue_set_napi(tp->dev, txq_idx,
> +					     NETDEV_QUEUE_TYPE_TX,
> +					     &tnapi->napi);
> +			txq_idx++;
> +		} else if (tnapi->rx_rcb) {

Shouldn't this be "if" instead of "else if" ?  A napi can be for both
a TX ring and an RX ring in some cases.
Thanks.

> +			netif_queue_set_napi(tp->dev, rxq_idx,
> +					     NETDEV_QUEUE_TYPE_RX,
> +					     &tnapi->napi);
> +			rxq_idx++;
> +		}
> +	}
> }
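For readers following the thread, a minimal sketch of what the suggested change would look like, with two independent checks so a tg3_napi that serves both a TX ring and an RX ring is linked to both queue types (field names are taken from the quoted hunk; this is an illustration, not the code as ultimately merged):

static void tg3_napi_enable(struct tg3 *tp)
{
	int txq_idx = 0, rxq_idx = 0;
	struct tg3_napi *tnapi;
	int i;

	for (i = 0; i < tp->irq_cnt; i++) {
		tnapi = &tp->napi[i];
		napi_enable(&tnapi->napi);
		/* Independent checks: a combined TX/RX vector registers
		 * its NAPI for both queue types.
		 */
		if (tnapi->tx_buffers) {
			netif_queue_set_napi(tp->dev, txq_idx,
					     NETDEV_QUEUE_TYPE_TX,
					     &tnapi->napi);
			txq_idx++;
		}
		if (tnapi->rx_rcb) {
			netif_queue_set_napi(tp->dev, rxq_idx,
					     NETDEV_QUEUE_TYPE_RX,
					     &tnapi->napi);
			rxq_idx++;
		}
	}
}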
On Mon, Oct 07, 2024 at 12:30:09AM -0700, Michael Chan wrote:
> On Sat, Oct 5, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Link queues to NAPIs using the netdev-genl API so this information is
> > queryable.
> >
> > $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> >         --dump queue-get --json='{"ifindex": 2}'
> >
> > [{'id': 0, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
> >  {'id': 1, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
> >  {'id': 2, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
> >  {'id': 3, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
> >  {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'}]
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
>
> > static void tg3_napi_enable(struct tg3 *tp)
> > {
> > +	int txq_idx = 0, rxq_idx = 0;
> > +	struct tg3_napi *tnapi;
> > 	int i;
> >
> > -	for (i = 0; i < tp->irq_cnt; i++)
> > -		napi_enable(&tp->napi[i].napi);
> > +	for (i = 0; i < tp->irq_cnt; i++) {
> > +		tnapi = &tp->napi[i];
> > +		napi_enable(&tnapi->napi);
> > +		if (tnapi->tx_buffers) {
> > +			netif_queue_set_napi(tp->dev, txq_idx,
> > +					     NETDEV_QUEUE_TYPE_TX,
> > +					     &tnapi->napi);
> > +			txq_idx++;
> > +		} else if (tnapi->rx_rcb) {
>
> Shouldn't this be "if" instead of "else if" ?  A napi can be for both
> a TX ring and an RX ring in some cases.
> Thanks.

OK, but is this approach (with the running counters) acceptable for
rxq and txq numbering? If not, can you suggest an alternative?
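As a concrete illustration of the running counters, in the configuration shown in the commit message (one TX-capable NAPI plus four RX NAPIs), the loop would assign:

vector 0 (napi-id 8193): tx_buffers set -> TX queue 0
vector 1 (napi-id 8194): rx_rcb set     -> RX queue 0
vector 2 (napi-id 8195): rx_rcb set     -> RX queue 1
vector 3 (napi-id 8196): rx_rcb set     -> RX queue 2
vector 4 (napi-id 8197): rx_rcb set     -> RX queue 3

which matches the queue-get output quoted at the top of the thread. Note that the vector-to-NAPI-ID ordering here is inferred from that output, not stated explicitly in the patch.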
On Mon, Oct 07, 2024 at 12:30:09AM -0700, Michael Chan wrote:
> On Sat, Oct 5, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
> > +		if (tnapi->tx_buffers) {
> > +			netif_queue_set_napi(tp->dev, txq_idx,
> > +					     NETDEV_QUEUE_TYPE_TX,
> > +					     &tnapi->napi);
> > +			txq_idx++;
> > +		} else if (tnapi->rx_rcb) {
>
> Shouldn't this be "if" instead of "else if" ?  A napi can be for both
> a TX ring and an RX ring in some cases.
> Thanks.

BTW: tg3 set_channels doesn't seem to support combined queues;
combined_count is not even examined in set_channels. But maybe
the link queue can be a combined queue, I don't know.

Regardless, I'll still make the change you requested as there is
similar code in tg3_request_irq.

But what I really would like to get feedback on is the rxq and txq
indexing with the running counters, please. That was called out
explicitly in the cover letter.

Thanks.
On Mon, Oct 7, 2024 at 7:23 AM Joe Damato <jdamato@fastly.com> wrote:
>
> On Mon, Oct 07, 2024 at 12:30:09AM -0700, Michael Chan wrote:
> > On Sat, Oct 5, 2024 at 7:57 AM Joe Damato <jdamato@fastly.com> wrote:
> > > +		if (tnapi->tx_buffers) {
> > > +			netif_queue_set_napi(tp->dev, txq_idx,
> > > +					     NETDEV_QUEUE_TYPE_TX,
> > > +					     &tnapi->napi);
> > > +			txq_idx++;
> > > +		} else if (tnapi->rx_rcb) {
> >
> > Shouldn't this be "if" instead of "else if" ?  A napi can be for both
> > a TX ring and an RX ring in some cases.
> > Thanks.
>
> BTW: tg3 set_channels doesn't seem to support combined queues;
> combined_count is not even examined in set_channels. But maybe
> the link queue can be a combined queue, I don't know.

tg3 is a little odd.  When there are more than 1 MSIX, TX starts from
vector 0 but RX starts at vector 1.  If there are more than 1 TX ring,
the 2nd TX ring is combined with the 1st RX ring and so on.

>
> Regardless, I'll still make the change you requested as there is
> similar code in tg3_request_irq.
>
> But what I really would like to get feedback on is the rxq and txq
> indexing with the running counters, please. That was called out
> explicitly in the cover letter.
>

It looks reasonable to me.  Thanks.
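To make the layout Michael describes concrete, a hypothetical configuration with 4 TX rings and 4 RX rings (so irq_cnt would be 5) would map out roughly as follows, together with the indices the running counters would produce once the two checks are independent:

vector 0: TX ring 0              -> TX queue 0
vector 1: RX ring 0 + TX ring 1  -> RX queue 0, TX queue 1
vector 2: RX ring 1 + TX ring 2  -> RX queue 1, TX queue 2
vector 3: RX ring 2 + TX ring 3  -> RX queue 2, TX queue 3
vector 4: RX ring 3              -> RX queue 3

The ring counts are only for illustration; the point is that the combined vectors (1-3) need both netif_queue_set_napi() calls, and the 'else if' would skip the RX side for them while also misnumbering the RX queue on the last, RX-only vector.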
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 6564072b47ba..12172783b9b6 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7395,18 +7395,47 @@ static int tg3_poll(struct napi_struct *napi, int budget)
 
 static void tg3_napi_disable(struct tg3 *tp)
 {
+	int txq_idx = tp->txq_cnt - 1;
+	int rxq_idx = tp->rxq_cnt - 1;
+	struct tg3_napi *tnapi;
 	int i;
 
-	for (i = tp->irq_cnt - 1; i >= 0; i--)
-		napi_disable(&tp->napi[i].napi);
+	for (i = tp->irq_cnt - 1; i >= 0; i--) {
+		tnapi = &tp->napi[i];
+		if (tnapi->tx_buffers) {
+			netif_queue_set_napi(tp->dev, txq_idx,
+					     NETDEV_QUEUE_TYPE_TX, NULL);
+			txq_idx--;
+		} else if (tnapi->rx_rcb) {
+			netif_queue_set_napi(tp->dev, rxq_idx,
+					     NETDEV_QUEUE_TYPE_RX, NULL);
+			rxq_idx--;
+		}
+		napi_disable(&tnapi->napi);
+	}
 }
 
 static void tg3_napi_enable(struct tg3 *tp)
 {
+	int txq_idx = 0, rxq_idx = 0;
+	struct tg3_napi *tnapi;
 	int i;
 
-	for (i = 0; i < tp->irq_cnt; i++)
-		napi_enable(&tp->napi[i].napi);
+	for (i = 0; i < tp->irq_cnt; i++) {
+		tnapi = &tp->napi[i];
+		napi_enable(&tnapi->napi);
+		if (tnapi->tx_buffers) {
+			netif_queue_set_napi(tp->dev, txq_idx,
+					     NETDEV_QUEUE_TYPE_TX,
+					     &tnapi->napi);
+			txq_idx++;
+		} else if (tnapi->rx_rcb) {
+			netif_queue_set_napi(tp->dev, rxq_idx,
+					     NETDEV_QUEUE_TYPE_RX,
+					     &tnapi->napi);
+			rxq_idx++;
+		}
+	}
 }
 
 static void tg3_napi_init(struct tg3 *tp)
Link queues to NAPIs using the netdev-genl API so this information is
queryable.

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump queue-get --json='{"ifindex": 2}'

[{'id': 0, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'}]

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/broadcom/tg3.c | 37 +++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)
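A related way to inspect the other side of this linkage, assuming the same interface index as above, is to dump the NAPI instances themselves via the napi-get operation in the same netdev spec:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump napi-get --json='{"ifindex": 2}'

Output is omitted here; the NAPI IDs it reports should line up with the napi-id values in the queue-get output quoted in the commit message.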