From patchwork Wed Jun 2 06:53:31 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12293189
From: Christoph Hellwig
To: Jens Axboe
Cc: Justin Sanders, Denis Efremov, Josef Bacik, Tim Waugh, Geoff Levand,
 Ilya Dryomov, "Md. Haris Iqbal", Jack Wang, "Michael S. Tsirkin",
 Jason Wang, Konrad Rzeszutek Wilk, Roger Pau Monné, Mike Snitzer,
 Maxim Levitsky, Alex Dubov, Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 dm-devel@redhat.com, linux-block@vger.kernel.org, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-s390@vger.kernel.org
Subject: [PATCH 16/30] aoe: use blk_mq_alloc_disk and blk_cleanup_disk
Date: Wed, 2 Jun 2021 09:53:31 +0300
Message-Id: <20210602065345.355274-17-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210602065345.355274-1-hch@lst.de>
References: <20210602065345.355274-1-hch@lst.de>

Use blk_mq_alloc_disk and blk_cleanup_disk to simplify the gendisk and
request_queue allocation.

Signed-off-by: Christoph Hellwig
---
 drivers/block/aoe/aoeblk.c | 33 ++++++++++++---------------------
 drivers/block/aoe/aoedev.c |  3 +--
 2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index c34e71b0c4a9..06b360f7123a 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -338,14 +338,13 @@ static const struct blk_mq_ops aoeblk_mq_ops = {
 	.queue_rq	= aoeblk_queue_rq,
 };
 
-/* alloc_disk and add_disk can sleep */
+/* blk_mq_alloc_disk and add_disk can sleep */
 void
 aoeblk_gdalloc(void *vp)
 {
 	struct aoedev *d = vp;
 	struct gendisk *gd;
 	mempool_t *mp;
-	struct request_queue *q;
 	struct blk_mq_tag_set *set;
 	ulong flags;
 	int late = 0;
@@ -362,19 +361,12 @@ aoeblk_gdalloc(void *vp)
 	if (late)
 		return;
 
-	gd = alloc_disk(AOE_PARTITIONS);
-	if (gd == NULL) {
-		pr_err("aoe: cannot allocate disk structure for %ld.%d\n",
-			d->aoemajor, d->aoeminor);
-		goto err;
-	}
-
 	mp = mempool_create(MIN_BUFS, mempool_alloc_slab, mempool_free_slab,
 		buf_pool_cache);
 	if (mp == NULL) {
 		printk(KERN_ERR "aoe: cannot allocate bufpool for %ld.%d\n",
 			d->aoemajor, d->aoeminor);
-		goto err_disk;
+		goto err;
 	}
 
 	set = &d->tag_set;
@@ -391,12 +383,11 @@ aoeblk_gdalloc(void *vp)
 		goto err_mempool;
 	}
 
-	q = blk_mq_init_queue(set);
-	if (IS_ERR(q)) {
+	gd = blk_mq_alloc_disk(set, d);
+	if (IS_ERR(gd)) {
 		pr_err("aoe: cannot allocate block queue for %ld.%d\n",
 			d->aoemajor, d->aoeminor);
-		blk_mq_free_tag_set(set);
-		goto err_mempool;
+		goto err_tagset;
 	}
 
 	spin_lock_irqsave(&d->lock, flags);
@@ -405,16 +396,16 @@ aoeblk_gdalloc(void *vp)
 	WARN_ON(d->flags & DEVFL_TKILL);
 	WARN_ON(d->gd);
 	WARN_ON(d->flags & DEVFL_UP);
-	blk_queue_max_hw_sectors(q, BLK_DEF_MAX_SECTORS);
-	blk_queue_io_opt(q, SZ_2M);
+	blk_queue_max_hw_sectors(gd->queue, BLK_DEF_MAX_SECTORS);
+	blk_queue_io_opt(gd->queue, SZ_2M);
 	d->bufpool = mp;
-	d->blkq = gd->queue = q;
-	q->queuedata = d;
+	d->blkq = gd->queue;
 	d->gd = gd;
 	if (aoe_maxsectors)
-		blk_queue_max_hw_sectors(q, aoe_maxsectors);
+		blk_queue_max_hw_sectors(gd->queue, aoe_maxsectors);
 	gd->major = AOE_MAJOR;
 	gd->first_minor = d->sysminor;
+	gd->minors = AOE_PARTITIONS;
 	gd->fops = &aoe_bdops;
 	gd->private_data = d;
 	set_capacity(gd, d->ssize);
@@ -435,10 +426,10 @@ aoeblk_gdalloc(void *vp)
 	spin_unlock_irqrestore(&d->lock, flags);
 	return;
 
+err_tagset:
+	blk_mq_free_tag_set(set);
 err_mempool:
 	mempool_destroy(mp);
-err_disk:
-	put_disk(gd);
 err:
 	spin_lock_irqsave(&d->lock, flags);
 	d->flags &= ~DEVFL_GD_NOW;
diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
index e2ea2356da06..c5753c6bfe80 100644
--- a/drivers/block/aoe/aoedev.c
+++ b/drivers/block/aoe/aoedev.c
@@ -277,9 +277,8 @@ freedev(struct aoedev *d)
 	if (d->gd) {
 		aoedisk_rm_debugfs(d);
 		del_gendisk(d->gd);
-		put_disk(d->gd);
+		blk_cleanup_disk(d->gd);
 		blk_mq_free_tag_set(&d->tag_set);
-		blk_cleanup_queue(d->blkq);
 	}
 	t = d->targets;
 	e = t + d->ntargets;
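
For context, a minimal, self-contained sketch of the allocation and teardown
pattern this series converts drivers to, assuming the 5.14-era block layer API
where blk_mq_alloc_disk() and blk_cleanup_disk() are available. This is not
part of the patch; all example_* identifiers and EXAMPLE_MAJOR are
hypothetical, and the aoe-specific conversion is in the diff above.

/*
 * Sketch only: generic blk_mq_alloc_disk()/blk_cleanup_disk() usage,
 * not driver code from this series.
 */
#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/module.h>

#define EXAMPLE_MAJOR	240	/* hypothetical, local/experimental range */

struct example_dev {
	struct blk_mq_tag_set	tag_set;
	struct gendisk		*gd;
};

static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	/* Placeholder: a real driver issues the I/O described by bd->rq here. */
	return BLK_STS_IOERR;
}

static const struct blk_mq_ops example_mq_ops = {
	.queue_rq	= example_queue_rq,
};

static const struct block_device_operations example_bdops = {
	.owner		= THIS_MODULE,
};

static int example_attach(struct example_dev *dev)
{
	struct gendisk *gd;
	int err;

	dev->tag_set.ops = &example_mq_ops;
	dev->tag_set.nr_hw_queues = 1;
	dev->tag_set.queue_depth = 128;
	dev->tag_set.numa_node = NUMA_NO_NODE;
	err = blk_mq_alloc_tag_set(&dev->tag_set);
	if (err)
		return err;

	/*
	 * One call now allocates the gendisk and its request_queue together
	 * and stores @dev in queue->queuedata.
	 */
	gd = blk_mq_alloc_disk(&dev->tag_set, dev);
	if (IS_ERR(gd)) {
		blk_mq_free_tag_set(&dev->tag_set);
		return PTR_ERR(gd);
	}

	gd->major = EXAMPLE_MAJOR;
	gd->first_minor = 0;
	gd->minors = 16;	/* replaces the old alloc_disk(16) argument */
	gd->fops = &example_bdops;
	gd->private_data = dev;
	set_capacity(gd, 0);
	dev->gd = gd;
	add_disk(gd);
	return 0;
}

static void example_detach(struct example_dev *dev)
{
	del_gendisk(dev->gd);
	/* Replaces the old blk_cleanup_queue() + put_disk() pair. */
	blk_cleanup_disk(dev->gd);
	blk_mq_free_tag_set(&dev->tag_set);
}

The point of the conversion, as the diff shows, is that the gendisk and its
request_queue are allocated and freed as a unit: the driver no longer needs a
separate request_queue variable, a manual queuedata assignment, or the
put_disk()/blk_cleanup_queue() pair on teardown.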