From patchwork Tue Jun 23 12:10:54 2009
X-Patchwork-Submitter: Gui Jianfeng
X-Patchwork-Id: 31961
Message-ID: <4A40C64E.8040305@cn.fujitsu.com>
Date: Tue, 23 Jun 2009 20:10:54 +0800
From: Gui Jianfeng
To: Vivek Goyal
Cc: dhaval@linux.vnet.ibm.com, snitzer@redhat.com, peterz@infradead.org,
 dm-devel@redhat.com, dpshah@google.com, jens.axboe@oracle.com,
 agk@redhat.com, balbir@linux.vnet.ibm.com, paolo.valente@unimore.it,
 fernando@oss.ntt.co.jp, mikew@google.com, jmoyer@redhat.com,
 nauman@google.com, m-ikeda@ds.jp.nec.com, lizf@cn.fujitsu.com,
 fchecconi@gmail.com, akpm@linux-foundation.org, jbaron@redhat.com,
 linux-kernel@vger.kernel.org, s-uchida@ap.jp.nec.com,
 righi.andrea@gmail.com, containers@lists.linux-foundation.org
In-Reply-To: <1245443858-8487-8-git-send-email-vgoyal@redhat.com>
References: <1245443858-8487-1-git-send-email-vgoyal@redhat.com>
 <1245443858-8487-8-git-send-email-vgoyal@redhat.com>
Subject: [dm-devel] Re: [PATCH 07/20] io-controller: Export disk time used
 and nr sectors dipatched through cgroups

Vivek Goyal wrote:
...
> +
> +static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
> +			struct cftype *cftype, struct seq_file *m)
> +{
> +	struct io_cgroup *iocg;
> +	struct io_group *iog;
> +	struct hlist_node *n;
> +
> +	if (!cgroup_lock_live_group(cgroup))
> +		return -ENODEV;
> +
> +	iocg = cgroup_to_io_cgroup(cgroup);
> +
> +	spin_lock_irq(&iocg->lock);

It's better to use rcu_read_lock() here instead, since this is a
read-only traversal of an RCU-protected list.
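FWIW, here is a minimal sketch of the full pattern this change relies on
(the struct and function names below are illustrative only, not from the
patch): dropping the spinlock on the read side is safe only because
writers keep taking the lock, use the _rcu hlist mutators, and defer
freeing entries until a grace period has elapsed.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative stand-ins for io_cgroup/io_group, not the real structures. */
struct my_group {
	unsigned long total_service;
	struct hlist_node group_node;
	struct rcu_head rcu;
};

struct my_cgroup {
	spinlock_t lock;		/* still taken by writers */
	struct hlist_head group_data;	/* RCU-protected list */
};

/* Reader side: no spinlock, just an RCU read-side critical section. */
static unsigned long sum_total_service(struct my_cgroup *cg)
{
	struct my_group *grp;
	struct hlist_node *n;
	unsigned long sum = 0;

	rcu_read_lock();
	hlist_for_each_entry_rcu(grp, n, &cg->group_data, group_node)
		sum += grp->total_service;
	rcu_read_unlock();

	return sum;
}

/* Writer side: keep taking the lock and use the _rcu list mutators... */
static void add_group(struct my_cgroup *cg, struct my_group *grp)
{
	spin_lock_irq(&cg->lock);
	hlist_add_head_rcu(&grp->group_node, &cg->group_data);
	spin_unlock_irq(&cg->lock);
}

static void free_group_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct my_group, rcu));
}

/* ...and defer the actual free until all RCU readers are done. */
static void remove_group(struct my_cgroup *cg, struct my_group *grp)
{
	spin_lock_irq(&cg->lock);
	hlist_del_rcu(&grp->group_node);
	spin_unlock_irq(&cg->lock);
	call_rcu(&grp->rcu, free_group_rcu);
}

The trade-off is that a reader may observe a slightly stale snapshot of
the list while a group is being added or removed, which should be
acceptable for statistics files like these.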
Signed-off-by: Gui Jianfeng
---
 block/elevator-fq.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 2ad40eb..d779282 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -1418,7 +1418,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
 
 	iocg = cgroup_to_io_cgroup(cgroup);
 
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
 		 * There might be groups which are not functional and
@@ -1430,7 +1430,7 @@ static int io_cgroup_disk_time_read(struct cgroup *cgroup,
 				iog->entity.total_service);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;
@@ -1448,7 +1448,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 
 	iocg = cgroup_to_io_cgroup(cgroup);
 
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
 		 * There might be groups which are not functional and
@@ -1460,7 +1460,7 @@ static int io_cgroup_disk_sectors_read(struct cgroup *cgroup,
 				iog->entity.total_sector_service);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;
@@ -1478,7 +1478,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
 		return -ENODEV;
 
 	iocg = cgroup_to_io_cgroup(cgroup);
-	spin_lock_irq(&iocg->lock);
+	rcu_read_lock();
 	/* Loop through all the io groups and print statistics */
 	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
 		/*
@@ -1491,7 +1491,7 @@ static int io_cgroup_disk_queue_read(struct cgroup *cgroup,
 				iog->queue_duration);
 		}
 	}
-	spin_unlock_irq(&iocg->lock);
+	rcu_read_unlock();
 	cgroup_unlock();
 
 	return 0;