From patchwork Sun Jul 20 18:48:12 2014
X-Patchwork-Submitter: David Herrmann
X-Patchwork-Id: 4592091
From: David Herrmann <dh.herrmann@gmail.com>
To: linux-input@vger.kernel.org
Cc: Dmitry Torokhov, Henrik Rydberg, David Herrmann
Subject: [RFC PATCH] Input: evdev - drop redundant list-locking
Date: Sun, 20 Jul 2014 20:48:12 +0200
Message-Id: <1405882092-7005-1-git-send-email-dh.herrmann@gmail.com>
X-Mailer: git-send-email 2.0.2

evdev->client_list is RCU-protected. There is no need to have a separate
spinlock just for the list; either one is good enough on its own, so
let's drop the spinlock.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
Hi

I stumbled across this one while doing some evdev reviews. Maybe I'm
missing something obvious and I should stop coding on Sundays, but the
RCU protection should be enough here, right?

We _could_ do a synchronize_rcu() during evdev_attach_client() to
guarantee that new events are really delivered once it returns, but
that seems rather pedantic to me.

I'm also not sure why we use RCU here anyway.
I mean, there's no high contention so a spinlock should be fine and
would get rid of the very expensive synchronize_rcu during close(). But
then again, I don't really care enough to write benchmarks for that..

Thanks
David

 drivers/input/evdev.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index 7a25a7a..1f38bd1 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -34,7 +34,6 @@ struct evdev {
 	wait_queue_head_t wait;
 	struct evdev_client __rcu *grab;
 	struct list_head client_list;
-	spinlock_t client_lock; /* protects client_list */
 	struct mutex mutex;
 	struct device dev;
 	struct cdev cdev;
@@ -433,17 +432,13 @@ static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
 static void evdev_attach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_add_tail_rcu(&client->node, &evdev->client_list);
-	spin_unlock(&evdev->client_lock);
 }
 
 static void evdev_detach_client(struct evdev *evdev,
 				struct evdev_client *client)
 {
-	spin_lock(&evdev->client_lock);
 	list_del_rcu(&client->node);
-	spin_unlock(&evdev->client_lock);
 	synchronize_rcu();
 }
 
@@ -485,10 +480,10 @@ static void evdev_hangup(struct evdev *evdev)
 {
 	struct evdev_client *client;
 
-	spin_lock(&evdev->client_lock);
-	list_for_each_entry(client, &evdev->client_list, node)
+	rcu_read_lock();
+	list_for_each_entry_rcu(client, &evdev->client_list, node)
 		kill_fasync(&client->fasync, SIGIO, POLL_HUP);
-	spin_unlock(&evdev->client_lock);
+	rcu_read_unlock();
 
 	wake_up_interruptible(&evdev->wait);
 }
@@ -1438,7 +1433,6 @@ static int evdev_connect(struct input_handler *handler, struct input_dev *dev,
 	}
 
 	INIT_LIST_HEAD(&evdev->client_list);
-	spin_lock_init(&evdev->client_lock);
 	mutex_init(&evdev->mutex);
 	init_waitqueue_head(&evdev->wait);
 	evdev->exist = true;
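
For reference, here is a minimal, illustrative sketch of the RCU list
pattern the patch leans on. It is not the evdev code itself: the demo_*
names are made up, and it assumes updaters (attach/detach) are still
serialized against each other by some other lock, e.g. a mutex -- RCU
only protects readers against concurrent updaters.

/*
 * Illustrative sketch only -- not drivers/input/evdev.c. Assumes all
 * updaters are serialized by some external lock (e.g. a mutex); RCU
 * lets readers walk the list without taking any lock at all.
 */
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_client {
	struct list_head node;
	int id;
};

static LIST_HEAD(demo_client_list);

/* Reader: may run concurrently with attach/detach, no spinlock needed. */
static void demo_for_each_client(void (*fn)(struct demo_client *client))
{
	struct demo_client *client;

	rcu_read_lock();
	list_for_each_entry_rcu(client, &demo_client_list, node)
		fn(client);
	rcu_read_unlock();
}

/* Updater: publish a new entry; readers see the list with or without it. */
static void demo_attach_client(struct demo_client *client)
{
	list_add_tail_rcu(&client->node, &demo_client_list);
}

/* Updater: unlink, then wait until no reader can still hold a reference. */
static void demo_detach_client(struct demo_client *client)
{
	list_del_rcu(&client->node);
	synchronize_rcu();	/* every ongoing reader has finished */
	kfree(client);		/* now safe to free */
}

The synchronize_rcu() in the detach path is also the call that makes
close() comparatively expensive, as mentioned above.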