From patchwork Sat Jul 31 14:59:16 2010
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 116199
From: Maxim Levitsky
To: lirc-list@lists.sourceforge.net
Cc: Jarod Wilson, linux-input@vger.kernel.org, linux-media@vger.kernel.org,
	Mauro Carvalho Chehab, Christoph Bartelmus, Maxim Levitsky
Subject: [PATCH 03/13] IR: replace spinlock with mutex.
Date: Sat, 31 Jul 2010 17:59:16 +0300
Message-Id: <1280588366-26101-4-git-send-email-maximlevitsky@gmail.com>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1280588366-26101-1-git-send-email-maximlevitsky@gmail.com>
References: <1280588366-26101-1-git-send-email-maximlevitsky@gmail.com>

diff --git a/drivers/media/IR/ir-raw-event.c b/drivers/media/IR/ir-raw-event.c
index 51f65da..9d5c029 100644
--- a/drivers/media/IR/ir-raw-event.c
+++ b/drivers/media/IR/ir-raw-event.c
@@ -13,7 +13,7 @@
  */
 
 #include <linux/workqueue.h>
-#include <linux/spinlock.h>
+#include <linux/mutex.h>
 #include <linux/sched.h>
 #include "ir-core-priv.h"
 
@@ -24,7 +24,7 @@
 static LIST_HEAD(ir_raw_client_list);
 
 /* Used to handle IR raw handler extensions */
-static DEFINE_SPINLOCK(ir_raw_handler_lock);
+static DEFINE_MUTEX(ir_raw_handler_lock);
 static LIST_HEAD(ir_raw_handler_list);
 static u64 available_protocols;
 
@@ -41,10 +41,10 @@ static void ir_raw_event_work(struct work_struct *work)
 		container_of(work, struct ir_raw_event_ctrl, rx_work);
 
 	while (kfifo_out(&raw->kfifo, &ev, sizeof(ev)) == sizeof(ev)) {
-		spin_lock(&ir_raw_handler_lock);
+		mutex_lock(&ir_raw_handler_lock);
 		list_for_each_entry(handler, &ir_raw_handler_list, list)
 			handler->decode(raw->input_dev, ev);
-		spin_unlock(&ir_raw_handler_lock);
+		mutex_unlock(&ir_raw_handler_lock);
 		raw->prev_ev = ev;
 	}
 }
@@ -150,9 +150,9 @@ u64 ir_raw_get_allowed_protocols()
 {
 	u64 protocols;
 
-	spin_lock(&ir_raw_handler_lock);
+	mutex_lock(&ir_raw_handler_lock);
 	protocols = available_protocols;
-	spin_unlock(&ir_raw_handler_lock);
+	mutex_unlock(&ir_raw_handler_lock);
 	return protocols;
 }
@@ -180,12 +180,12 @@ int ir_raw_event_register(struct input_dev *input_dev)
 		return rc;
 	}
 
-	spin_lock(&ir_raw_handler_lock);
+	mutex_lock(&ir_raw_handler_lock);
 	list_add_tail(&ir->raw->list, &ir_raw_client_list);
 	list_for_each_entry(handler, &ir_raw_handler_list, list)
 		if (handler->raw_register)
 			handler->raw_register(ir->raw->input_dev);
-	spin_unlock(&ir_raw_handler_lock);
+	mutex_unlock(&ir_raw_handler_lock);
 
 	return 0;
 }
@@ -200,12 +200,12 @@ void ir_raw_event_unregister(struct input_dev *input_dev)
 
 	cancel_work_sync(&ir->raw->rx_work);
 
-	spin_lock(&ir_raw_handler_lock);
+	mutex_lock(&ir_raw_handler_lock);
 	list_del(&ir->raw->list);
 	list_for_each_entry(handler, &ir_raw_handler_list, list)
 		if (handler->raw_unregister)
 			handler->raw_unregister(ir->raw->input_dev);
-	spin_unlock(&ir_raw_handler_lock);
+	mutex_unlock(&ir_raw_handler_lock);
 
 	kfifo_free(&ir->raw->kfifo);
 	kfree(ir->raw);
@@ -220,13 +220,13 @@ int ir_raw_handler_register(struct ir_raw_handler *ir_raw_handler)
 {
 	struct ir_raw_event_ctrl *raw;
 
-	spin_lock(&ir_raw_handler_lock);
+	mutex_lock(&ir_raw_handler_lock);
 	list_add_tail(&ir_raw_handler->list, &ir_raw_handler_list);
 	if (ir_raw_handler->raw_register)
 		list_for_each_entry(raw, &ir_raw_client_list, list)
 			ir_raw_handler->raw_register(raw->input_dev);
 	available_protocols |= ir_raw_handler->protocols;
-	spin_unlock(&ir_raw_handler_lock);
+	mutex_unlock(&ir_raw_handler_lock);
 
 	return 0;
 }
@@ -236,13 +236,13 @@ void ir_raw_handler_unregister(struct ir_raw_handler *ir_raw_handler)
 {
 	struct ir_raw_event_ctrl *raw;
 
-	spin_lock(&ir_raw_handler_lock);
+	mutex_lock(&ir_raw_handler_lock);
 	list_del(&ir_raw_handler->list);
 	if (ir_raw_handler->raw_unregister)
 		list_for_each_entry(raw, &ir_raw_client_list, list)
 			ir_raw_handler->raw_unregister(raw->input_dev);
 	available_protocols &= ~ir_raw_handler->protocols;
-	spin_unlock(&ir_raw_handler_lock);
+	mutex_unlock(&ir_raw_handler_lock);
 }
 EXPORT_SYMBOL(ir_raw_handler_unregister);
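
Note (not part of the patch): the conversion matters because the decode and
raw_register/raw_unregister callbacks run with ir_raw_handler_lock held, and a
spinlock forbids sleeping in that region while a mutex allows it; in this file
the lock is taken from process context (for example the rx_work workqueue
handler). Below is a minimal illustrative sketch of the resulting pattern,
assuming a simplified handler list; every "example_" name is hypothetical and
not taken from the driver.

#include <linux/list.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_handler_lock);
static LIST_HEAD(example_handler_list);

struct example_handler {
	struct list_head list;
	int (*decode)(void *priv);
};

/*
 * Walks the handler list the same way ir_raw_event_work() does.  Because a
 * mutex is held instead of a spinlock, each decode callback may block, for
 * instance to allocate memory with GFP_KERNEL.  Must only be called from
 * process context, which is the case for workqueue handlers.
 */
static void example_dispatch(void *priv)
{
	struct example_handler *handler;

	mutex_lock(&example_handler_lock);
	list_for_each_entry(handler, &example_handler_list, list)
		handler->decode(priv);
	mutex_unlock(&example_handler_lock);
}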