Message ID | 20240909231904.1322387-1-sean.anderson@linux.dev (mailing list archive) |
---|---|
State | New, archived |
Series | [net] net: xilinx: axienet: Schedule NAPI in two steps |
On 9/9/2024 4:19 PM, Sean Anderson wrote:
>
> As advised by Documentation/networking/napi.rst, masking IRQs after
> calling napi_schedule can be racy. Avoid this by only masking/scheduling
> if napi_schedule_prep returns true. Additionally, since we are running
> in an IRQ context we can use the irqoff variant as well.
>
> Fixes: 9e2bc267e780 ("net: axienet: Use NAPI for TX completion path")
> Fixes: cc37610caaf8 ("net: axienet: implement NAPI and GRO receive")
> Signed-off-by: Sean Anderson <sean.anderson@linux.dev>

Reviewed-by: Shannon Nelson <shannon.nelson@amd.com>

> ---
>
>  drivers/net/ethernet/xilinx/xilinx_axienet_main.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> index 9eb300fc3590..4f67072d5149 100644
> --- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> +++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> @@ -1222,9 +1222,10 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
>  		u32 cr = lp->tx_dma_cr;
>
>  		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
> -		axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
> -
> -		napi_schedule(&lp->napi_tx);
> +		if (napi_schedule_prep(&lp->napi_tx)) {
> +			axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
> +			__napi_schedule_irqoff(&lp->napi_tx);
> +		}
>  	}
>
>  	return IRQ_HANDLED;
> @@ -1266,9 +1267,10 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
>  		u32 cr = lp->rx_dma_cr;
>
>  		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
> -		axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
> -
> -		napi_schedule(&lp->napi_rx);
> +		if (napi_schedule_prep(&lp->napi_rx)) {
> +			axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
> +			__napi_schedule_irqoff(&lp->napi_rx);
> +		}
>  	}
>
>  	return IRQ_HANDLED;
> --
> 2.35.1.1320.gc452695387.dirty
On Mon, 9 Sep 2024 19:19:04 -0400 Sean Anderson wrote:
> Additionally, since we are running
> in an IRQ context we can use the irqoff variant as well.

The _irqoff variant is a bit of a minefield. It causes issues if the
kernel is built with forced IRQ threading. With datacenter NICs forced
threading is never used, so we look the other way. Since this is a fix
and the driver is embedded, I reckon we should stick to __napi_schedule().
On 9/10/24 21:58, Jakub Kicinski wrote:
> On Mon, 9 Sep 2024 19:19:04 -0400 Sean Anderson wrote:
>> Additionally, since we are running
>> in an IRQ context we can use the irqoff variant as well.
>
> The _irqoff variant is a bit of a minefield. It causes issues if the
> kernel is built with forced IRQ threading. With datacenter NICs forced
> threading is never used, so we look the other way. Since this is a fix
> and the driver is embedded, I reckon we should stick to __napi_schedule().

Does it? __napi_schedule_irqoff selects between __napi_schedule and
____napi_schedule based on whether PREEMPT_RT is enabled. Is there some
other way to force IRQ threading?

--Sean
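For reference, the helper Sean is describing lives in net/core/dev.c; paraphrased from memory (not a verbatim copy), it looks roughly like the sketch below. The detail that matters for the exchange that follows is that the fallback is keyed on the compile-time CONFIG_PREEMPT_RT option, not on IRQ threading forced at runtime:

```c
/* Rough paraphrase of __napi_schedule_irqoff(): take the
 * "interrupts are already off" fast path unless the kernel is built
 * with PREEMPT_RT, in which case defer to __napi_schedule(), which
 * disables interrupts itself.
 */
void __napi_schedule_irqoff(struct napi_struct *n)
{
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
	else
		__napi_schedule(n);
}
```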
On Thu, 12 Sep 2024 10:23:06 -0400 Sean Anderson wrote:
> __napi_schedule_irqoff selects between __napi_schedule and
> ____napi_schedule based on whether PREEMPT_RT is enabled. Is there some
> other way to force IRQ threading?

I think so, IIRC threadirqs= kernel boot option lets you do it.
I don't remember all the details now :( LMK if I'm wrong
On 9/12/24 11:43, Jakub Kicinski wrote:
> On Thu, 12 Sep 2024 10:23:06 -0400 Sean Anderson wrote:
>> __napi_schedule_irqoff selects between __napi_schedule and
>> ____napi_schedule based on whether PREEMPT_RT is enabled. Is there some
>> other way to force IRQ threading?
>
> I think so, IIRC threadirqs= kernel boot option lets you do it.
> I don't remember all the details now :( LMK if I'm wrong

Hm, maybe __napi_schedule_irqoff should be updated to take that into
account. I will resend without this change.

--Sean
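For illustration only (this is not the actual respin), a version that keeps the two-step scheduling from the patch below but drops the _irqoff variant, as agreed above, would look something like this in the TX handler:

```c
/* Sketch of the TX IRQ path using plain __napi_schedule(), which
 * disables interrupts itself and is therefore safe whether the handler
 * runs in hard-IRQ context or as a forced-threaded IRQ (threadirqs=).
 */
u32 cr = lp->tx_dma_cr;

cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
if (napi_schedule_prep(&lp->napi_tx)) {
	/* Mask further TX completion interrupts only once we know we
	 * own the NAPI instance, then schedule it.
	 */
	axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
	__napi_schedule(&lp->napi_tx);
}
```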
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 9eb300fc3590..4f67072d5149 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -1222,9 +1222,10 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
 		u32 cr = lp->tx_dma_cr;
 
 		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
-		axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
-
-		napi_schedule(&lp->napi_tx);
+		if (napi_schedule_prep(&lp->napi_tx)) {
+			axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
+			__napi_schedule_irqoff(&lp->napi_tx);
+		}
 	}
 
 	return IRQ_HANDLED;
@@ -1266,9 +1267,10 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
 		u32 cr = lp->rx_dma_cr;
 
 		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
-		axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
-
-		napi_schedule(&lp->napi_rx);
+		if (napi_schedule_prep(&lp->napi_rx)) {
+			axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
+			__napi_schedule_irqoff(&lp->napi_rx);
+		}
 	}
 
 	return IRQ_HANDLED;
As advised by Documentation/networking/napi.rst, masking IRQs after
calling napi_schedule can be racy. Avoid this by only masking/scheduling
if napi_schedule_prep returns true. Additionally, since we are running
in an IRQ context we can use the irqoff variant as well.

Fixes: 9e2bc267e780 ("net: axienet: Use NAPI for TX completion path")
Fixes: cc37610caaf8 ("net: axienet: implement NAPI and GRO receive")
Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
---
 drivers/net/ethernet/xilinx/xilinx_axienet_main.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
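The other half of the pattern the commit message relies on is the NAPI poll side, which re-enables the interrupts that the IRQ handler masked once polling is done. A simplified sketch, loosely modeled on the driver's TX poll path (not the driver's verbatim code):

```c
/* Simplified sketch of the poll side: once the completed work is below
 * budget and napi_complete_done() succeeds, restore the cached control
 * register value to unmask the TX interrupts masked in axienet_tx_irq().
 */
static int axienet_tx_poll(struct napi_struct *napi, int budget)
{
	struct axienet_local *lp = container_of(napi, struct axienet_local,
						napi_tx);
	int packets = 0;

	/* ... reclaim completed TX descriptors, counting them in packets ... */

	if (packets < budget && napi_complete_done(napi, packets))
		axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, lp->tx_dma_cr);

	return packets;
}
```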