[RFC] Input: evdev - drop redundant list-locking

Message ID 1405882092-7005-1-git-send-email-dh.herrmann@gmail.com (mailing list archive)
State New, archived

Commit Message

David Herrmann July 20, 2014, 6:48 p.m. UTC
evdev->client_list is RCU-protected. There is no need to have a
separate spinlock just for the list; either one is good enough, so let's
drop the spinlock.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
Hi

I stumbled across this one when doing some evdev reviews. Maybe I'm missing
something obvious and I should stop coding on Sundays. But the RCU-protection
should be enough here, right?

We _could_ do a synchronize_rcu() during evdev_attach_client() to guarantee that
new events are really delivered once it returns. But that seems rather pedantic
to me.
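
For reference, that pedantic variant would be a one-line addition on top of this
patch (sketch only, not part of the patch):

static void evdev_attach_client(struct evdev *evdev,
				struct evdev_client *client)
{
	list_add_tail_rcu(&client->node, &evdev->client_list);
	/* sketch: wait for pre-existing readers so that events raised
	 * after we return are guaranteed to reach the new client */
	synchronize_rcu();
}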

I'm also not sure why we use RCU here, anyway. I mean, there's no high
contention, so a spinlock should be fine and would get rid of the very expensive
synchronize_rcu() during close(). But then again, I don't really care enough to
write benchmarks for that...
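
If we went that spinlock-only route, detach and the readers would look roughly
like this (sketch only; the event-delivery path runs in atomic context, so the
irq-safe lock variants would need a closer look):

static void evdev_detach_client(struct evdev *evdev,
				struct evdev_client *client)
{
	spin_lock(&evdev->client_lock);
	list_del(&client->node);
	spin_unlock(&evdev->client_lock);
	/* no synchronize_rcu() needed anymore */
}

static void evdev_hangup(struct evdev *evdev)
{
	struct evdev_client *client;

	spin_lock(&evdev->client_lock);
	list_for_each_entry(client, &evdev->client_list, node)
		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
	spin_unlock(&evdev->client_lock);

	wake_up_interruptible(&evdev->wait);
}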

Thanks
David

 drivers/input/evdev.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

Comments

Dmitry Torokhov July 20, 2014, 6:54 p.m. UTC | #1
On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
> evdev->client_list is rcu-protected. There is no need to have a
> separate spinlock just for the list. Either one is good enough, so lets
> drop the spinlock.
> 
> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
> ---
> Hi
> 
> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
> something obvious and I should stop coding on Sundays. But the RCU-protection
> should be enough here, right?

RCU protection is for traversing the list only; writes (such as adding and
removing elements from client_list) still have to be mutually exclusive.
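
That is, the usual pattern looks roughly like this (generic sketch, using the
names from evdev.c):

	/* readers: RCU is enough */
	rcu_read_lock();
	list_for_each_entry_rcu(client, &evdev->client_list, node)
		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
	rcu_read_unlock();

	/* writers: still need mutual exclusion among themselves */
	spin_lock(&evdev->client_lock);
	list_add_tail_rcu(&client->node, &evdev->client_list);
	spin_unlock(&evdev->client_lock);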

Thanks.
David Herrmann July 20, 2014, 7 p.m. UTC | #2
Hi

On Sun, Jul 20, 2014 at 8:54 PM, Dmitry Torokhov
<dmitry.torokhov@gmail.com> wrote:
> On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
>> evdev->client_list is rcu-protected. There is no need to have a
>> separate spinlock just for the list. Either one is good enough, so lets
>> drop the spinlock.
>>
>> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
>> ---
>> Hi
>>
>> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
>> something obvious and I should stop coding on Sundays. But the RCU-protection
>> should be enough here, right?
>
> RCU protection is for traversing list only, writes (as is adding and removing
> elements from client_list) still have to be mutually exclusive.

Oh, you mean to protect against concurrent writes? Right, but we could
just use evdev->mutex for that. I mean all paths that call
attach_client() or detach_client() already lock evdev->mutex at some
point. It would allow us to get rid of the lock.
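
Roughly this (sketch only, assuming every caller holds evdev->mutex):

static void evdev_attach_client(struct evdev *evdev,
				struct evdev_client *client)
{
	lockdep_assert_held(&evdev->mutex);
	list_add_tail_rcu(&client->node, &evdev->client_list);
}

static void evdev_detach_client(struct evdev *evdev,
				struct evdev_client *client)
{
	lockdep_assert_held(&evdev->mutex);
	list_del_rcu(&client->node);
	synchronize_rcu();
}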

Thanks
David
Dmitry Torokhov July 20, 2014, 7:53 p.m. UTC | #3
On Sun, Jul 20, 2014 at 09:00:02PM +0200, David Herrmann wrote:
> Hi
> 
> On Sun, Jul 20, 2014 at 8:54 PM, Dmitry Torokhov
> <dmitry.torokhov@gmail.com> wrote:
> > On Sun, Jul 20, 2014 at 08:48:12PM +0200, David Herrmann wrote:
> >> evdev->client_list is rcu-protected. There is no need to have a
> >> separate spinlock just for the list. Either one is good enough, so lets
> >> drop the spinlock.
> >>
> >> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
> >> ---
> >> Hi
> >>
> >> I stumbled across this one when doing some evdev reviews. Maybe I'm missing
> >> something obvious and I should stop coding on Sundays. But the RCU-protection
> >> should be enough here, right?
> >
> > RCU protection is for traversing list only, writes (as is adding and removing
> > elements from client_list) still have to be mutually exclusive.
> 
> Oh, you mean to protect against concurrent writes? Right, but we could
> just use evdev->mutex for that. I mean all paths that call
> attach_client() or detach_client() already lock evdev->mutex at some
> point. It would allow us to get rid of the lock.

Right, we probably could do it by pulling the taking/releasing of evdev->mutex
into evdev_open and evdev_release.
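
Something along these lines (sketch only, flow simplified; the locking
currently done inside evdev_open_device() would have to move up accordingly):

static int evdev_open(struct inode *inode, struct file *file)
{
	struct evdev *evdev = container_of(inode->i_cdev, struct evdev, cdev);
	struct evdev_client *client;

	/* ... allocate and set up client, error handling omitted ... */

	mutex_lock(&evdev->mutex);
	evdev_attach_client(evdev, client);
	/* open the underlying device with the mutex already held */
	mutex_unlock(&evdev->mutex);

	file->private_data = client;
	return 0;
}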

Thanks.

Patch

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index 7a25a7a..1f38bd1 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -34,7 +34,6 @@  struct evdev {
 	wait_queue_head_t wait;
 	struct evdev_client __rcu *grab;
 	struct list_head client_list;
-	spinlock_t client_lock; /* protects client_list */
 	struct mutex mutex;
 	struct device dev;
 	struct cdev cdev;
@@ -433,17 +432,13 @@  static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
 static void evdev_attach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_add_tail_rcu(&client->node, &evdev->client_list);
-	spin_unlock(&evdev->client_lock);
 }
 
 static void evdev_detach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_del_rcu(&client->node);
-	spin_unlock(&evdev->client_lock);
 	synchronize_rcu();
 }
 
@@ -485,10 +480,10 @@  static void evdev_hangup(struct evdev *evdev)
 {
 	struct evdev_client *client;
 
-	spin_lock(&evdev->client_lock);
-	list_for_each_entry(client, &evdev->client_list, node)
+	rcu_read_lock();
+	list_for_each_entry_rcu(client, &evdev->client_list, node)
 		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
-	spin_unlock(&evdev->client_lock);
+	rcu_read_unlock();
 
 	wake_up_interruptible(&evdev->wait);
 }
@@ -1438,7 +1433,6 @@  static int evdev_connect(struct input_handler *handler, struct input_dev *dev,
 	}
 
 	INIT_LIST_HEAD(&evdev->client_list);
-	spin_lock_init(&evdev->client_lock);
 	mutex_init(&evdev->mutex);
 	init_waitqueue_head(&evdev->wait);
 	evdev->exist = true;