
Input: evdev - per-client waitgroups

Message ID 20200429184126.2155-1-kl@kl.wtf (mailing list archive)
State Accepted
Commit 4ba8b8aec58bf8de3ca29ea08d7eb11a127e7b90
Series Input: evdev - per-client waitgroups

Commit Message

Kenny Levinsen April 29, 2020, 6:41 p.m. UTC
All evdev clients share a common waitgroup. On new input events, this
waitgroup is woken once for every client that did not filter the events,
leading to duplicated and unwanted wakeups.

Split the shared waitgroup into per-client waitgroups for more
fine-grained wakeups.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
---
 drivers/input/evdev.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

Comments

Kenny Levinsen May 29, 2020, 2:36 p.m. UTC | #1
April 29, 2020 8:41 PM, "Kenny Levinsen" <kl@kl.wtf> wrote:

> All evdev clients share a common waitgroup. On new input events, this
> waitgroup is woken once for every client that did not filter the events,
> leading to duplicated and unwanted wakeups.
> 
> Split the shared waitgroup into per-client waitgroups for more
> fine-grained wakeups.
> 
> Signed-off-by: Kenny Levinsen <kl@kl.wtf>
> ---
> drivers/input/evdev.c | 19 +++++++++----------
> 1 file changed, 9 insertions(+), 10 deletions(-)

Here's a 1-month ping for lack of a better idea. Apologies if that's not the right thing to do, just worried that things might have been lost to the great inbox event horizon.
Dmitry Torokhov Oct. 5, 2020, 11:35 p.m. UTC | #2
Hi Kenny,

On Wed, Apr 29, 2020 at 08:41:26PM +0200, Kenny Levinsen wrote:
> All evdev clients share a common waitgroup. On new input events, this
> waitgroup is woken once for every client that did not filter the events,

I am having trouble parsing the changelog (I think I agree with the
change itself).  Did you mean to say "this waitqueue wakes up every
client, even ones that requested to filter out events that are being
delivered, leading to duplicated and unwanted wakeups"?

> leading to duplicated and unwanted wakeups.
> 
> Split the shared waitgroup into per-client waitgroups for more
> fine-grained wakeups.
> 

Thanks.
Kenny Levinsen Oct. 6, 2020, 9:15 a.m. UTC | #3
October 6, 2020 1:35 AM, dmitry.torokhov@gmail.com wrote:

> On Wed, Apr 29, 2020 at 08:41:26PM +0200, Kenny Levinsen wrote:
> 
>> All evdev clients share a common waitgroup. On new input events, this
>> waitgroup is woken once for every client that did not filter the events,
> 
> I am having trouble parsing the changelog (I think I agree with the
> change itself). Did you mean to say "this waitqueue wakes up every
> client, even ones that requested to filter out events that are being
> delivered, leading to duplicated and unwanted wakeups"?

Ah, I suppose my original wording was a bit convoluted. Perhaps the following
is clearer:

	All evdev clients share a common waitgroup. On new input events, all
	clients waiting on this waitgroup are woken up, even those filtering
	out the events, possibly more than once per event. This leads to
	duplicated and unwanted wakeups.

What I tried to say is that not only do all clients polling the device/blocked
on read end up woken up, instead of being woken just once, they are woken once
for every client that was interested in the event.

So, if you have two clients interested and one uninterested, then the shared
waitgroup that all three clients are waiting on is woken up twice in a row.

Should I send an updated patch with the new wording? I'm also fine with your
suggested wording if you prefer that.

Best regards,
Kenny Levinsen
Dmitry Torokhov Oct. 7, 2020, 1:35 a.m. UTC | #4
On Tue, Oct 06, 2020 at 09:15:32AM +0000, kl@kl.wtf wrote:
> October 6, 2020 1:35 AM, dmitry.torokhov@gmail.com wrote:
> 
> > On Wed, Apr 29, 2020 at 08:41:26PM +0200, Kenny Levinsen wrote:
> > 
> >> All evdev clients share a common waitgroup. On new input events, this
> >> waitgroup is woken once for every client that did not filter the events,
> > 
> > I am having trouble parsing the changelog (I think I agree with the
> > change itself). Did you mean to say "this waitqueue wakes up every
> > client, even ones that requested to filter out events that are being
> > delivered, leading to duplicated and unwanted wakeups"?
> 
> Ah, I suppose my original wording was a bit convoluted. Perhaps the following
> is clearer:
> 
> 	All evdev clients share a common waitgroup. On new input events, all
> 	clients waiting on this waitgroup are woken up, even those filtering
> 	out the events, possibly more than once per event. This leads to
> 	duplicated and unwanted wakeups.
> 
> What I tried to say is that not only do all clients polling the device/blocked
> on read end up woken up, instead of being woken just once, they are woken once
> for every client that was interested in the event.
> 
> So, if you have two clients interested and one uninterested, then the shared
> waitgroup that all three clients are waiting on is woken up twice in a row.
> 
> Should I send an updated patch with the new wording? I'm also fine with your
> suggested wording if you prefer that.

I used the new description from above and applied, thank you.

Patch

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index f54d3d31f61d..e9a8ba36e53e 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -28,7 +28,6 @@ 
 struct evdev {
 	int open;
 	struct input_handle handle;
-	wait_queue_head_t wait;
 	struct evdev_client __rcu *grab;
 	struct list_head client_list;
 	spinlock_t client_lock; /* protects client_list */
@@ -43,6 +42,7 @@  struct evdev_client {
 	unsigned int tail;
 	unsigned int packet_head; /* [future] position of the first element of next packet */
 	spinlock_t buffer_lock; /* protects access to buffer, head and tail */
+	wait_queue_head_t wait;
 	struct fasync_struct *fasync;
 	struct evdev *evdev;
 	struct list_head node;
@@ -245,7 +245,6 @@  static void evdev_pass_values(struct evdev_client *client,
 			const struct input_value *vals, unsigned int count,
 			ktime_t *ev_time)
 {
-	struct evdev *evdev = client->evdev;
 	const struct input_value *v;
 	struct input_event event;
 	struct timespec64 ts;
@@ -282,7 +281,7 @@  static void evdev_pass_values(struct evdev_client *client,
 	spin_unlock(&client->buffer_lock);
 
 	if (wakeup)
-		wake_up_interruptible_poll(&evdev->wait,
+		wake_up_interruptible_poll(&client->wait,
 			EPOLLIN | EPOLLOUT | EPOLLRDNORM | EPOLLWRNORM);
 }
 
@@ -440,11 +439,11 @@  static void evdev_hangup(struct evdev *evdev)
 	struct evdev_client *client;
 
 	spin_lock(&evdev->client_lock);
-	list_for_each_entry(client, &evdev->client_list, node)
+	list_for_each_entry(client, &evdev->client_list, node) {
 		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
+		wake_up_interruptible_poll(&client->wait, EPOLLHUP | EPOLLERR);
+	}
 	spin_unlock(&evdev->client_lock);
-
-	wake_up_interruptible_poll(&evdev->wait, EPOLLHUP | EPOLLERR);
 }
 
 static int evdev_release(struct inode *inode, struct file *file)
@@ -492,6 +491,7 @@  static int evdev_open(struct inode *inode, struct file *file)
 	if (!client)
 		return -ENOMEM;
 
+	init_waitqueue_head(&client->wait);
 	client->bufsize = bufsize;
 	spin_lock_init(&client->buffer_lock);
 	client->evdev = evdev;
@@ -608,7 +608,7 @@  static ssize_t evdev_read(struct file *file, char __user *buffer,
 			break;
 
 		if (!(file->f_flags & O_NONBLOCK)) {
-			error = wait_event_interruptible(evdev->wait,
+			error = wait_event_interruptible(client->wait,
 					client->packet_head != client->tail ||
 					!evdev->exist || client->revoked);
 			if (error)
@@ -626,7 +626,7 @@  static __poll_t evdev_poll(struct file *file, poll_table *wait)
 	struct evdev *evdev = client->evdev;
 	__poll_t mask;
 
-	poll_wait(file, &evdev->wait, wait);
+	poll_wait(file, &client->wait, wait);
 
 	if (evdev->exist && !client->revoked)
 		mask = EPOLLOUT | EPOLLWRNORM;
@@ -959,7 +959,7 @@  static int evdev_revoke(struct evdev *evdev, struct evdev_client *client,
 	client->revoked = true;
 	evdev_ungrab(evdev, client);
 	input_flush_device(&evdev->handle, file);
-	wake_up_interruptible_poll(&evdev->wait, EPOLLHUP | EPOLLERR);
+	wake_up_interruptible_poll(&client->wait, EPOLLHUP | EPOLLERR);
 
 	return 0;
 }
@@ -1372,7 +1372,6 @@  static int evdev_connect(struct input_handler *handler, struct input_dev *dev,
 	INIT_LIST_HEAD(&evdev->client_list);
 	spin_lock_init(&evdev->client_lock);
 	mutex_init(&evdev->mutex);
-	init_waitqueue_head(&evdev->wait);
 	evdev->exist = true;
 
 	dev_no = minor;