From patchwork Tue Aug 11 09:32:36 2009
X-Patchwork-Submitter: Nikanth Karthikesan <knikanth@suse.de>
X-Patchwork-Id: 40550
From: Nikanth Karthikesan <knikanth@suse.de>
Organization: suse.de
To: Jens Axboe
Cc: Kiyoshi Ueda, Mike Snitzer, linux-kernel@vger.kernel.org,
    dm-devel@redhat.com, Alasdair G Kergon
Date: Tue, 11 Aug 2009 15:02:36 +0530
Message-Id: <200908111502.36570.knikanth@suse.de>
In-Reply-To: <200908101618.18085.knikanth@suse.de>
References: <200908081025.58865.knikanth@suse.de>
    <200908101551.08605.knikanth@suse.de>
    <200908101618.18085.knikanth@suse.de>
Subject: [dm-devel] [PATCH-v3 1/2] Allow delaying initialization of queue
    after allocation
List-Id: device-mapper development

Export a way to delay initializing a request_queue after allocating it.
Device-mapper needs this because it creates the queue at device creation
time but decides only after the first successful table load whether the
device will use an elevator and request structures; only request-based
dm devices use them. Without this, one must either initialize the
mempool and elevator and then free them again when the device turns out
to be bio-based, or leave them allocated but unused, as is done today.

This slightly changes the behaviour of blk_init_queue_node():
blk_put_queue() is now called even if blk_init_free_list() fails.

Also export elv_register_queue() to modules.
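To make the intended call sequence concrete, a minimal sketch of the
two-phase setup follows. The md_* names and the empty request_fn body
are illustrative only; nothing below is part of this patch apart from
the two block-layer calls:

#include <linux/blkdev.h>
#include <linux/spinlock.h>

static struct request_queue *md_queue;  /* illustrative stand-in for dm's queue */
static DEFINE_SPINLOCK(md_lock);

static void md_request_fn(struct request_queue *q)
{
        /* request dispatch would go here; not relevant to the setup */
}

/*
 * Phase 1, at device creation: allocation only.  This is already
 * sufficient for a bio-based device, which never needs the request
 * free list or an elevator.
 */
static int md_create(int node)
{
        md_queue = blk_alloc_queue_node(GFP_KERNEL, node);
        return md_queue ? 0 : -ENOMEM;
}

/*
 * Phase 2, once the first successfully loaded table turns out to be
 * request-based: initialize the free list and default elevator on the
 * queue allocated earlier.
 */
static int md_make_request_based(void)
{
        return blk_init_allocated_queue(md_queue, md_request_fn, &md_lock);
}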
Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
---

diff --git a/block/blk-core.c b/block/blk-core.c
index e3299a7..8b05b3b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -495,6 +495,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	if (!q)
 		return NULL;
 
+	q->node = node_id;
+
 	q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
 	q->backing_dev_info.unplug_io_data = q;
 	q->backing_dev_info.ra_pages =
@@ -569,12 +571,25 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 	if (!q)
 		return NULL;
 
-	q->node = node_id;
-	if (blk_init_free_list(q)) {
+	if (blk_init_allocated_queue(q, rfn, lock)) {
+		blk_put_queue(q);
 		kmem_cache_free(blk_requestq_cachep, q);
 		return NULL;
 	}
 
+	return q;
+}
+EXPORT_SYMBOL(blk_init_queue_node);
+
+int blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
+			     spinlock_t *lock)
+{
+	int err = 0;
+
+	err = blk_init_free_list(q);
+	if (err)
+		goto out;
+
 	q->request_fn		= rfn;
 	q->prep_rq_fn		= NULL;
 	q->unplug_fn		= generic_unplug_device;
@@ -591,15 +606,23 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 	/*
 	 * all done
 	 */
-	if (!elevator_init(q, NULL)) {
-		blk_queue_congestion_threshold(q);
-		return q;
-	}
+	err = elevator_init(q, NULL);
+	if (err)
+		goto free_and_out;
 
-	blk_put_queue(q);
-	return NULL;
+	blk_queue_congestion_threshold(q);
+
+	return 0;
+
+free_and_out:
+	/*
+	 * Cleanup mempool allocated by blk_init_free_list
+	 */
+	mempool_destroy(q->rq.rq_pool);
+out:
+	return err;
 }
-EXPORT_SYMBOL(blk_init_queue_node);
+EXPORT_SYMBOL(blk_init_allocated_queue);
 
 int blk_get_queue(struct request_queue *q)
 {
diff --git a/block/elevator.c b/block/elevator.c
index 2d511f9..0827cd3 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -930,6 +930,7 @@ int elv_register_queue(struct request_queue *q)
 	}
 	return error;
 }
+EXPORT_SYMBOL(elv_register_queue);
 
 static void __elv_unregister_queue(struct elevator_queue *e)
 {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 69103e0..4a26fc1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -901,6 +901,8 @@ extern void blk_abort_queue(struct request_queue *);
 extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
 						 spinlock_t *lock, int node_id);
 extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *);
+extern int blk_init_allocated_queue(struct request_queue *q,
+				    request_fn_proc *rfn, spinlock_t *lock);
 extern void blk_cleanup_queue(struct request_queue *);
 extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
 extern void blk_queue_bounce_limit(struct request_queue *, u64);
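A closing usage note on error handling and the second export: unlike
blk_init_queue_node(), a failing blk_init_allocated_queue() leaves the
queue itself intact and owned by the caller, which may keep it for
bio-based use or tear it down. The sketch below (reusing the
illustrative names from the earlier sketch; whether to fall back or
clean up is the caller's policy, not something this patch mandates)
also shows where the newly exported elv_register_queue() fits in:

static int md_switch_to_request_based(struct request_queue *q)
{
        int err;

        /*
         * On failure the queue is still valid: this sketch tears it
         * down, but a dm-like caller could equally keep it bio-based.
         */
        err = blk_init_allocated_queue(q, md_request_fn, &md_lock);
        if (err) {
                blk_cleanup_queue(q);
                return err;
        }

        /*
         * Once the disk is registered in sysfs (i.e. after add_disk()),
         * the exported elv_register_queue() publishes the scheduler
         * tunables under /sys/block/<dev>/queue/iosched.
         */
        return elv_register_queue(q);
}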