From patchwork Sat Aug 8 04:56:00 2009
X-Patchwork-Submitter: Nikanth Karthikesan
X-Patchwork-Id: 40089
From: Nikanth Karthikesan
Organization: suse.de
To: Jens Axboe, Alasdair G Kergon
Date: Sat, 8 Aug 2009 10:26:00 +0530
User-Agent: KMail/1.11.1 (Linux/2.6.27.23-0.1-default; KDE/4.2.1; x86_64; ; )
Message-Id: <200908081026.01097.knikanth@suse.de>
Cc: Kiyoshi Ueda, dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: [dm-devel] [PATCH 2/2] Initialize mempool and elevator only for request-based dm devices

Initialize the request_queue and elevator only when the device is marked
as a request-based device. This avoids unnecessarily creating the mempool
for requests on bio-based devices.

It also fixes the elevator being wrongly initialized even for bio-based
devices: because /sys/block/dm-*/queue/scheduler is exported for
device-mapper devices, it is easy to be misled into tuning scheduler
options on bio-based devices, where no I/O scheduler is used at all.

Signed-off-by: Nikanth Karthikesan

---
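(Illustration only, not part of the patch.) The sysfs attribute mentioned
in the changelog can be inspected from userspace with a small sketch like
the one below; the device name "dm-0" is an assumed example. Before this
change, a bio-based dm device still lists elevator choices here even
though no I/O scheduler is ever used for it.

#include <stdio.h>

int main(void)
{
	/* "dm-0" is just an example device name; the changelog's point is
	 * that this queue/scheduler attribute is exported for every dm
	 * device, bio-based or request-based. */
	const char *path = "/sys/block/dm-0/queue/scheduler";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("%s: %s", path, line);
	fclose(f);
	return 0;
}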
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8a311ea..b01dfbe 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1749,22 +1749,21 @@ static struct mapped_device *alloc_dev(int minor)
 	INIT_LIST_HEAD(&md->uevent_list);
 	spin_lock_init(&md->uevent_lock);
 
-	md->queue = blk_init_queue(dm_request_fn, NULL);
+	md->queue = blk_alloc_queue(GFP_KERNEL);
 	if (!md->queue)
 		goto bad_queue;
 
 	/*
 	 * Request-based dm devices cannot be stacked on top of bio-based dm
-	 * devices.  The type of this dm device has not been decided yet,
-	 * although we initialized the queue using blk_init_queue().
+	 * devices.  The type of this dm device has not been decided yet.
 	 * The type is decided at the first table loading time.
 	 * To prevent problematic device stacking, clear the queue flag
 	 * for request stacking support until then.
 	 *
 	 * This queue is new, so no concurrency on the queue_flags.
 	 */
+	md->queue->queue_flags = QUEUE_FLAG_DEFAULT;
 	queue_flag_clear_unlocked(QUEUE_FLAG_STACKABLE, md->queue);
-	md->saved_make_request_fn = md->queue->make_request_fn;
 	md->queue->queuedata = md;
 	md->queue->backing_dev_info.congested_fn = dm_any_congested;
 	md->queue->backing_dev_info.congested_data = md;
@@ -1772,9 +1771,6 @@ static struct mapped_device *alloc_dev(int minor)
 	blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
 	md->queue->unplug_fn = dm_unplug_all;
 	blk_queue_merge_bvec(md->queue, dm_merge_bvec);
-	blk_queue_softirq_done(md->queue, dm_softirq_done);
-	blk_queue_prep_rq(md->queue, dm_prep_fn);
-	blk_queue_lld_busy(md->queue, dm_lld_busy);
 
 	md->disk = alloc_disk(1);
 	if (!md->disk)
@@ -2203,7 +2199,25 @@ int dm_swap_table(struct mapped_device *md, struct dm_table *table)
 		goto out;
 	}
 
-	__unbind(md);
+	if (md->map)
+		__unbind(md);
+	else {
+		/* new device is being marked as either request-based or bio-based */
+		if (dm_table_request_based(table)) {
+			/* Initialize queue for request-based dm */
+			r = blk_init_allocated_queue(md->queue, dm_request_fn,
+						     NULL);
+			if (r)
+				goto out;
+			md->saved_make_request_fn = md->queue->make_request_fn;
+			blk_queue_make_request(md->queue, dm_request);
+			blk_queue_softirq_done(md->queue, dm_softirq_done);
+			blk_queue_prep_rq(md->queue, dm_prep_fn);
+			blk_queue_lld_busy(md->queue, dm_lld_busy);
+
+		}
+	}
+
 	r = __bind(md, table, &limits);
 
 out: