Message ID: 20201119083024.119566-1-bjorn.topel@gmail.com
Series: Introduce preferred busy-polling
On Thu, 19 Nov 2020 at 09:30, Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> This series introduces three new features:
>
> 1. A new "heavy traffic" busy-polling variant that works in concert
>    with the existing napi_defer_hard_irqs and gro_flush_timeout knobs.
>
> 2. A new socket option that lets a user change the busy-polling NAPI
>    budget.
>
> 3. Allow busy-polling to be performed on XDP sockets.
>
> The existing busy-polling mode, enabled by the SO_BUSY_POLL socket
> option or system-wide using the /proc/sys/net/core/busy_read knob, is
> opportunistic: if the NAPI context is not already scheduled, the
> syscall will poll it. If, after busy-polling, the budget is exceeded,
> the busy-polling logic will schedule the NAPI onto the regular
> softirq handling.
>
> One implication of the behavior above is that a busy/heavily loaded
> NAPI context will never enter/allow busy-polling. Some applications
> prefer that most NAPI processing be done by busy-polling.
>
> This series adds a new socket option, SO_PREFER_BUSY_POLL, that works
> in concert with the napi_defer_hard_irqs and gro_flush_timeout
> knobs. The napi_defer_hard_irqs and gro_flush_timeout knobs were
> introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
> feature"), and allow a user to defer re-enabling interrupts and
> instead schedule the NAPI context from a watchdog timer. When a user
> enables SO_PREFER_BUSY_POLL, again with the other knobs enabled, and
> the NAPI context is being processed by a softirq, the softirq NAPI
> processing will exit early to allow busy-polling to be performed.
>
> If the application stops performing busy-polling via a system call,
> the watchdog timer defined by gro_flush_timeout will time out, and
> regular softirq handling will resume.
>
> In summary: heavy traffic applications that prefer busy-polling over
> softirq processing should use this option.
>

Eric/Jakub, any more thoughts/input? Tomatoes? :-P

Thank you,
Björn
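For illustration, here is a minimal userspace sketch of how an application might opt in, assuming the socket option names (SO_PREFER_BUSY_POLL, SO_BUSY_POLL_BUDGET) and numeric values land as proposed in this series; older headers will not define them, and error handling is simplified:

```c
/* Sketch: enable preferred busy-polling on a socket.
 * The option values below match the proposed additions; they may not
 * exist on older kernels/headers, and setting them may require
 * privileges on some kernels.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_PREFER_BUSY_POLL
#define SO_PREFER_BUSY_POLL 69
#endif
#ifndef SO_BUSY_POLL_BUDGET
#define SO_BUSY_POLL_BUDGET 70
#endif

static int enable_prefer_busy_poll(int fd)
{
	int timeout_us = 20;	/* per-socket variant of busy_read */
	int prefer = 1;		/* prefer busy-polling over softirq */
	int budget = 64;	/* illustrative per-socket NAPI budget */

	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &timeout_us, sizeof(timeout_us)) ||
	    setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
		       &prefer, sizeof(prefer)) ||
	    setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
		       &budget, sizeof(budget))) {
		perror("setsockopt");
		return -1;
	}
	return 0;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0 || enable_prefer_busy_poll(fd))
		return 1;
	/* ... bind/connect and run the receive loop here ... */
	close(fd);
	return 0;
}
```

On top of the socket options, the per-device deferral knobs referred to above live in sysfs, e.g. /sys/class/net/<dev>/napi_defer_hard_irqs and /sys/class/net/<dev>/gro_flush_timeout.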
On Mon, 23 Nov 2020 14:31:14 +0100 Björn Töpel wrote:
> Eric/Jakub, any more thoughts/input? Tomatoes? :-P
Looking now, sorry for the delay. Somehow patches without net in their
tag feel like they can wait..
On Thu, 19 Nov 2020 09:30:14 +0100 Björn Töpel wrote:
> Performance netperf UDP_RR:
>
> Note that netperf UDP_RR is not a heavy traffic test, and preferred
> busy-polling is not typically something we want to use here.
>
> $ echo 20 | sudo tee /proc/sys/net/core/busy_read
> $ netperf -H 192.168.1.1 -l 30 -t UDP_RR -v 2 -- \
>     -o min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
>
> busy-polling blocking sockets:            12,13.33,224,0.63,74731.177
>
> I hacked netperf to use non-blocking sockets and re-ran:
>
> busy-polling non-blocking sockets:        12,13.46,218,0.72,73991.172
> prefer busy-polling non-blocking sockets: 12,13.62,221,0.59,73138.448
>
> Using the preferred busy-polling mode does not impact performance.
>
> The above tests were done with the 'ice' driver.

Any interest in this work from ADQ folks? I recall they were using
memcache with busy polling for their tests, it'd be cool to see how much
this helps memcache on P99+ latency!
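For context, a rough sketch of the kind of non-blocking request/response loop the hacked netperf would run; the names and sizes here are made up for illustration, and it assumes busy_read (or the socket options above) is already configured so that a receive that would otherwise return EAGAIN busy-polls the NAPI context instead:

```c
/* Rough sketch of one non-blocking UDP_RR-style transaction.
 * With busy polling enabled on the socket, the EAGAIN spin below is
 * where the kernel busy-polls the driver's NAPI context.
 */
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

static int udp_rr_once(int fd, const struct sockaddr *peer, socklen_t len)
{
	char req[64] = "ping", resp[64];
	ssize_t n;

	if (sendto(fd, req, sizeof(req), 0, peer, len) < 0)
		return -1;

	/* Spin on the reply instead of sleeping in the socket. */
	do {
		n = recv(fd, resp, sizeof(resp), MSG_DONTWAIT);
	} while (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK));

	return n < 0 ? -1 : 0;
}
```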