From patchwork Fri Apr 15 06:34:10 2011
From: Jens Axboe <JAxboe@fusionio.com>
X-Patchwork-Id: 710181
Message-ID: <4DA7E6E2.5000902@fusionio.com>
Date: Fri, 15 Apr 2011 08:34:10 +0200
To: "hch@infradead.org"
Cc: "linux-raid@vger.kernel.org", "dm-devel@redhat.com",
 "linux-kernel@vger.kernel.org", Mike Snitzer
Subject: Re: [dm-devel] [PATCH 05/10] block: remove per-queue plugging
In-Reply-To: <20110415042611.GA3116@infradead.org>
References: <20110310005810.GA17911@redhat.com>
 <20110405130541.6c2b5f86@notabene.brown>
 <20110411145022.710c30e9@notabene.brown>
 <4DA2C7BE.6060804@fusionio.com>
 <20110411205928.13915719@notabene.brown>
 <4DA2E03A.2080607@fusionio.com>
 <20110411212635.7959de70@notabene.brown>
 <4DA2E7F0.9010904@fusionio.com>
 <20110411220505.1028816e@notabene.brown>
 <4DA2F00E.6010907@fusionio.com>
 <20110415042611.GA3116@infradead.org>
List-Id: device-mapper development

On 2011-04-15 06:26, hch@infradead.org wrote:
> Btw, "block: move queue run on unplug to kblockd" currently moves
> the __blk_run_queue call to kblockd unconditionally. But I'm not
> sure that's correct - if we do an explicit blk_finish_plug there's
> no point in forcing the context switch.

It's correct, but yes, it's not optimal for the explicit unplug. I
think it really depends: for the single sync case, it's not ideal to
punt to kblockd. But if you have a bunch of threads doing IO, you
probably DO want to punt it to kblockd, to avoid too many threads
hammering on the queue lock at the same time. It would need testing
to be sure; the below would be one way to accomplish that.

diff --git a/block/blk-core.c b/block/blk-core.c
index b598fa7..995e995 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2662,16 +2662,16 @@ static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
 	return !(rqa->q <= rqb->q);
 }
 
-static void queue_unplugged(struct request_queue *q, unsigned int depth)
+static void queue_unplugged(struct request_queue *q, unsigned int depth, bool run_from_wq)
 {
 	trace_block_unplug_io(q, depth);
-	__blk_run_queue(q, true);
+	__blk_run_queue(q, run_from_wq);
 
 	if (q->unplugged_fn)
 		q->unplugged_fn(q);
 }
 
-void blk_flush_plug_list(struct blk_plug *plug)
+void blk_flush_plug_list(struct blk_plug *plug, bool run_from_wq)
 {
 	struct request_queue *q;
 	unsigned long flags;
@@ -2706,7 +2706,7 @@ void blk_flush_plug_list(struct blk_plug *plug)
 		BUG_ON(!rq->q);
 		if (rq->q != q) {
 			if (q) {
-				queue_unplugged(q, depth);
+				queue_unplugged(q, depth, run_from_wq);
 				spin_unlock(q->queue_lock);
 			}
 			q = rq->q;
@@ -2727,7 +2727,7 @@ void blk_flush_plug_list(struct blk_plug *plug)
 	}
 
 	if (q) {
-		queue_unplugged(q, depth);
+		queue_unplugged(q, depth, run_from_wq);
 		spin_unlock(q->queue_lock);
 	}
 
@@ -2737,7 +2737,7 @@ EXPORT_SYMBOL(blk_flush_plug_list);
 
 void blk_finish_plug(struct blk_plug *plug)
 {
-	blk_flush_plug_list(plug);
+	blk_flush_plug_list(plug, false);
 
 	if (plug == current->plug)
 		current->plug = NULL;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ffe48ff..1c76506 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -865,14 +865,14 @@ struct blk_plug {
 
 extern void blk_start_plug(struct blk_plug *);
 extern void blk_finish_plug(struct blk_plug *);
-extern void blk_flush_plug_list(struct blk_plug *);
+extern void blk_flush_plug_list(struct blk_plug *, bool);
 
 static inline void blk_flush_plug(struct task_struct *tsk)
 {
 	struct blk_plug *plug = tsk->plug;
 
 	if (plug)
-		blk_flush_plug_list(plug);
+		blk_flush_plug_list(plug, true);
 }
 
 static inline bool blk_needs_flush_plug(struct task_struct *tsk)
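
[Editor's illustration, not part of the original mail] To make the
tradeoff concrete, here is a minimal caller-side sketch assuming the
run_from_wq patch above is applied to the 2.6.39-era block API;
submit_batch and its arguments are hypothetical names invented for the
example. With this patch, an explicit blk_finish_plug() flushes with
run_from_wq == false and runs the queue directly in the caller's
context, while the implicit flush the scheduler does on context switch
(blk_flush_plug()) passes true and punts the queue run to kblockd:

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Hypothetical batch submitter; sketch only, assuming the patch above. */
static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	/* From here on, submitted requests are held in current->plug. */
	blk_start_plug(&plug);

	for (i = 0; i < nr; i++)
		submit_bio(READ, bios[i]);

	/*
	 * Explicit unplug: blk_finish_plug() calls
	 * blk_flush_plug_list(plug, false), so queue_unplugged() runs the
	 * queue synchronously here - no kblockd context switch for the
	 * single sync submitter.
	 *
	 * Had this task blocked inside the loop, the scheduler's
	 * blk_flush_plug() would have flushed with run_from_wq == true,
	 * handing the queue run to kblockd so that many sleeping
	 * submitters don't all hammer the queue lock.
	 */
	blk_finish_plug(&plug);
}

That split is the point of the bool argument: the sync path keeps its
low latency, and only the contended, scheduler-driven path pays for the
kblockd handoff.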