
[v4] fs/epoll: Enable non-blocking busypoll when epoll timeout is 0

Message ID 1594447781-27115-1-git-send-email-sridhar.samudrala@intel.com (mailing list archive)
State New, archived

Commit Message

Samudrala, Sridhar July 11, 2020, 6:09 a.m. UTC
This patch triggers a non-blocking busy poll when busy polling is
enabled, epoll is called with a timeout of 0, and the eventpoll is
associated with a napi_id. This enables an application thread to go
through the napi poll routine once by calling epoll with a 0 timeout.

poll/select with a 0 timeout behave in a similar manner.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>

v4:
- Fix a typo (Andy)
v3:
- reset napi_id if no event available after busy poll (Alex)
v2: 
- Added net_busy_loop_on() check (Eric)
---
 fs/eventpoll.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Comments

Alexander Duyck July 11, 2020, 4:05 p.m. UTC | #1
On Fri, Jul 10, 2020 at 11:13 PM Sridhar Samudrala
<sridhar.samudrala@intel.com> wrote:
>
> This patch triggers a non-blocking busy poll when busy polling is
> enabled, epoll is called with a timeout of 0, and the eventpoll is
> associated with a napi_id. This enables an application thread to go
> through the napi poll routine once by calling epoll with a 0 timeout.
>
> poll/select with a 0 timeout behave in a similar manner.
>
> Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
>
> v4:
> - Fix a typo (Andy)
> v3:
> - reset napi_id if no event available after busy poll (Alex)
> v2:
> - Added net_busy_loop_on() check (Eric)
> ---
>  fs/eventpoll.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 12eebcdea9c8..10da7a8e1c2b 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -1847,6 +1847,22 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>                 eavail = ep_events_available(ep);
>                 write_unlock_irq(&ep->lock);
>
> +               /*
> +                * Trigger non-blocking busy poll if timeout is 0 and there are
> +                * no events available. Passing timed_out(1) to ep_busy_loop
> +                * will make sure that busy polling is triggered only once.
> +                */
> +               if (!eavail && net_busy_loop_on()) {
> +                       ep_busy_loop(ep, timed_out);
> +
> +                       write_lock_irq(&ep->lock);
> +                       eavail = ep_events_available(ep);
> +                       write_unlock_irq(&ep->lock);
> +
> +                       if (!eavail)
> +                               ep_reset_busy_poll_napi_id(ep);
> +               }
> +
>                 goto send_events;
>         }
>

This addresses the concern that I had.

Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Patch

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 12eebcdea9c8..10da7a8e1c2b 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1847,6 +1847,22 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		eavail = ep_events_available(ep);
 		write_unlock_irq(&ep->lock);
 
+		/*
+		 * Trigger non-blocking busy poll if timeout is 0 and there are
+		 * no events available. Passing timed_out(1) to ep_busy_loop
+		 * will make sure that busy polling is triggered only once.
+		 */
+		if (!eavail && net_busy_loop_on()) {
+			ep_busy_loop(ep, timed_out);
+
+			write_lock_irq(&ep->lock);
+			eavail = ep_events_available(ep);
+			write_unlock_irq(&ep->lock);
+
+			if (!eavail)
+				ep_reset_busy_poll_napi_id(ep);
+		}
+
 		goto send_events;
 	}
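For the new branch to be taken, `net_busy_loop_on()` must return true, which is controlled by the `net.core.busy_poll` sysctl. A configuration sketch (values are illustrative, not recommendations from the patch):

```shell
# Enable busy polling so net_busy_loop_on() returns true.
# net.core.busy_poll: microseconds to busy-poll in poll/select/epoll.
sysctl -w net.core.busy_poll=50
# net.core.busy_read: microseconds to busy-poll on blocking socket reads.
sysctl -w net.core.busy_read=50
```

With `busy_poll` left at its default of 0, the patched code falls through to the existing 0-timeout behavior unchanged.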