
[net-next,v6,3/4] eventpoll: Add per-epoll prefer busy poll option

Message ID: 20240205210453.11301-4-jdamato@fastly.com
State: New, archived
Series: Per epoll context busy poll support

Commit Message

Joe Damato Feb. 5, 2024, 9:04 p.m. UTC
When using epoll-based busy poll, the prefer_busy_poll option is hardcoded
to false. Users may want to enable prefer_busy_poll to be used in
conjunction with gro_flush_timeout and defer_hard_irqs_count to keep device
IRQs masked.

Other busy poll methods allow enabling or disabling prefer busy poll via
SO_PREFER_BUSY_POLL, but epoll-based busy polling uses a hardcoded value.

Fix this edge case by adding support for a per-epoll context
prefer_busy_poll option. The default is false, as it was hardcoded before
this change.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 fs/eventpoll.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
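
For context on the per-socket path mentioned in the commit message: SO_PREFER_BUSY_POLL is an ordinary boolean SOL_SOCKET option, so existing applications toggle the preference per socket roughly as in the sketch below. This is illustrative only and not part of this patch; it assumes uapi headers recent enough to define SO_PREFER_BUSY_POLL. The field added here carries the same boolean at the epoll-context level, and the userspace interface for setting it is outside the scope of this patch.

/* Minimal sketch of the existing per-socket knob (not part of this patch). */
#include <stdio.h>
#include <sys/socket.h>

static int enable_prefer_busy_poll(int sockfd)
{
	int one = 1;

	/* Ask the kernel to prefer busy polling (keeping device IRQs
	 * deferred) for the NAPI instance this socket is bound to. */
	if (setsockopt(sockfd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
		       &one, sizeof(one)) < 0) {
		perror("setsockopt(SO_PREFER_BUSY_POLL)");
		return -1;
	}
	return 0;
}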

Comments

Jakub Kicinski Feb. 7, 2024, 7:04 p.m. UTC | #1
On Mon,  5 Feb 2024 21:04:48 +0000 Joe Damato wrote:
> When using epoll-based busy poll, the prefer_busy_poll option is hardcoded
> to false. Users may want to enable prefer_busy_poll to be used in
> conjunction with gro_flush_timeout and defer_hard_irqs_count to keep device
> IRQs masked.
> 
> Other busy poll methods allow enabling or disabling prefer busy poll via
> SO_PREFER_BUSY_POLL, but epoll-based busy polling uses a hardcoded value.
> 
> Fix this edge case by adding support for a per-epoll context
> prefer_busy_poll option. The default is false, as it was hardcoded before
> this change.

Reviewed-by: Jakub Kicinski <kuba@kernel.org>

Eric Dumazet Feb. 8, 2024, 5:49 p.m. UTC | #2
On Wed, Feb 7, 2024 at 8:04 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon,  5 Feb 2024 21:04:48 +0000 Joe Damato wrote:
> > When using epoll-based busy poll, the prefer_busy_poll option is hardcoded
> > to false. Users may want to enable prefer_busy_poll to be used in
> > conjunction with gro_flush_timeout and defer_hard_irqs_count to keep device
> > IRQs masked.
> >
> > Other busy poll methods allow enabling or disabling prefer busy poll via
> > SO_PREFER_BUSY_POLL, but epoll-based busy polling uses a hardcoded value.
> >
> > Fix this edge case by adding support for a per-epoll context
> > prefer_busy_poll option. The default is false, as it was hardcoded before
> > this change.
>
> Reviewed-by: Jakub Kicinski <kuba@kernel.org>

Reviewed-by: Eric Dumazet <edumazet@google.com>

Patch

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 3985434df527..a69ee11682b9 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -231,6 +231,7 @@  struct eventpoll {
 	u64 busy_poll_usecs;
 	/* busy poll packet budget */
 	u16 busy_poll_budget;
+	bool prefer_busy_poll;
 #endif
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -440,13 +441,14 @@  static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
 {
 	unsigned int napi_id = READ_ONCE(ep->napi_id);
 	u16 budget = READ_ONCE(ep->busy_poll_budget);
+	bool prefer_busy_poll = READ_ONCE(ep->prefer_busy_poll);
 
 	if (!budget)
 		budget = BUSY_POLL_BUDGET;
 
 	if ((napi_id >= MIN_NAPI_ID) && ep_busy_loop_on(ep)) {
-		napi_busy_loop(napi_id, nonblock ? NULL : ep_busy_loop_end, ep, false,
-			       budget);
+		napi_busy_loop(napi_id, nonblock ? NULL : ep_busy_loop_end,
+				ep, prefer_busy_poll, budget);
 		if (ep_events_available(ep))
 			return true;
 		/*
@@ -2105,6 +2107,7 @@  static int do_epoll_create(int flags)
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	ep->busy_poll_usecs = 0;
 	ep->busy_poll_budget = 0;
+	ep->prefer_busy_poll = false;
 #endif
 	ep->file = file;
 	fd_install(fd, file);
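
As the commit message notes, prefer_busy_poll only keeps device IRQs masked when the per-netdev NAPI deferral knobs are also configured. The sketch below is a hedged illustration of that setup via the standard sysfs attributes; it is not part of the patch, and the interface name "eth0" and the values are placeholders.

/* Illustrative only: configure the NAPI deferral knobs referenced above. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Wait up to 200us (value in nanoseconds) before flushing GRO and
	 * re-arming device IRQs. */
	write_sysfs("/sys/class/net/eth0/gro_flush_timeout", "200000");
	/* Tolerate up to 2 empty NAPI polls before re-enabling hard IRQs. */
	write_sysfs("/sys/class/net/eth0/napi_defer_hard_irqs", "2");
	return 0;
}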