From patchwork Tue Mar 24 11:02:53 2020
X-Patchwork-Submitter: Bob Liu
X-Patchwork-Id: 11455159
From: Bob Liu
To: dm-devel@redhat.com
Cc: Damien.LeMoal@wdc.com, linux-block@vger.kernel.org, Dmitry.Fomichev@wdc.com, hare@suse.de, Bob Liu
Subject: [RFC PATCH v2 1/3] dm zoned: rename dev name to zoned_dev
Date: Tue, 24 Mar 2020 19:02:53 +0800
Message-Id: <20200324110255.8385-2-bob.liu@oracle.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20200324110255.8385-1-bob.liu@oracle.com>
References: <20200324110255.8385-1-bob.liu@oracle.com>
This is a preparatory patch with no functional change. Since a regular device will be introduced, rename the "dev" field to "zoned_dev" to make things clear.

Signed-off-by: Bob Liu
---
 drivers/md/dm-zoned-metadata.c | 112 ++++++++++++++++++++---------------------
 drivers/md/dm-zoned-target.c   |  62 +++++++++++------------
 2 files changed, 87 insertions(+), 87 deletions(-)

diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c index 369de15..e0e8be0 100644 --- a/drivers/md/dm-zoned-metadata.c +++ b/drivers/md/dm-zoned-metadata.c @@ -130,7 +130,7 @@ struct dmz_sb { * In-memory metadata. */ struct dmz_metadata { - struct dmz_dev *dev; + struct dmz_dev *zoned_dev; sector_t zone_bitmap_size; unsigned int zone_nr_bitmap_blocks; @@ -194,12 +194,12 @@ unsigned int dmz_id(struct dmz_metadata *zmd, struct dm_zone *zone) sector_t dmz_start_sect(struct dmz_metadata *zmd, struct dm_zone *zone) { - return (sector_t)dmz_id(zmd, zone) << zmd->dev->zone_nr_sectors_shift; + return (sector_t)dmz_id(zmd, zone) << zmd->zoned_dev->zone_nr_sectors_shift; } sector_t dmz_start_block(struct dmz_metadata *zmd, struct dm_zone *zone) { - return (sector_t)dmz_id(zmd, zone) << zmd->dev->zone_nr_blocks_shift; + return (sector_t)dmz_id(zmd, zone) << zmd->zoned_dev->zone_nr_blocks_shift; } unsigned int dmz_nr_chunks(struct dmz_metadata *zmd) @@ -404,7 +404,7 @@ static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd, sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no; struct bio *bio; - if (dmz_bdev_is_dying(zmd->dev)) + if (dmz_bdev_is_dying(zmd->zoned_dev)) return ERR_PTR(-EIO); /* Get a new block and a BIO to read it */ @@ -440,7 +440,7 @@ static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd, /* Submit read BIO */ bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->dev->bdev); + bio_set_dev(bio, zmd->zoned_dev->bdev); bio->bi_private = mblk; bio->bi_end_io = dmz_mblock_bio_end_io; bio_set_op_attrs(bio, REQ_OP_READ, REQ_META | REQ_PRIO); @@ -555,7 +555,7 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd, TASK_UNINTERRUPTIBLE); if (test_bit(DMZ_META_ERROR, &mblk->state)) { dmz_release_mblock(zmd, mblk); - dmz_check_bdev(zmd->dev); + dmz_check_bdev(zmd->zoned_dev); return ERR_PTR(-EIO); } @@ -582,7 +582,7 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, sector_t block = zmd->sb[set].block + mblk->no; struct bio *bio; - if (dmz_bdev_is_dying(zmd->dev)) + if (dmz_bdev_is_dying(zmd->zoned_dev)) return -EIO; bio = bio_alloc(GFP_NOIO, 1); @@ -594,7 +594,7 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, set_bit(DMZ_META_WRITING, &mblk->state); bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->dev->bdev); + bio_set_dev(bio, zmd->zoned_dev->bdev); bio->bi_private = mblk; bio->bi_end_io = dmz_mblock_bio_end_io; bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_META | REQ_PRIO); @@ -613,7 +613,7 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, struct bio *bio; int
ret; - if (dmz_bdev_is_dying(zmd->dev)) + if (dmz_bdev_is_dying(zmd->zoned_dev)) return -EIO; bio = bio_alloc(GFP_NOIO, 1); @@ -621,14 +621,14 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, return -ENOMEM; bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->dev->bdev); + bio_set_dev(bio, zmd->zoned_dev->bdev); bio_set_op_attrs(bio, op, REQ_SYNC | REQ_META | REQ_PRIO); bio_add_page(bio, page, DMZ_BLOCK_SIZE, 0); ret = submit_bio_wait(bio); bio_put(bio); if (ret) - dmz_check_bdev(zmd->dev); + dmz_check_bdev(zmd->zoned_dev); return ret; } @@ -661,7 +661,7 @@ static int dmz_write_sb(struct dmz_metadata *zmd, unsigned int set) ret = dmz_rdwr_block(zmd, REQ_OP_WRITE, block, mblk->page); if (ret == 0) - ret = blkdev_issue_flush(zmd->dev->bdev, GFP_NOIO, NULL); + ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); return ret; } @@ -695,7 +695,7 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd, TASK_UNINTERRUPTIBLE); if (test_bit(DMZ_META_ERROR, &mblk->state)) { clear_bit(DMZ_META_ERROR, &mblk->state); - dmz_check_bdev(zmd->dev); + dmz_check_bdev(zmd->zoned_dev); ret = -EIO; } nr_mblks_submitted--; @@ -703,7 +703,7 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd, /* Flush drive cache (this will also sync data) */ if (ret == 0) - ret = blkdev_issue_flush(zmd->dev->bdev, GFP_NOIO, NULL); + ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); return ret; } @@ -760,7 +760,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) */ dmz_lock_flush(zmd); - if (dmz_bdev_is_dying(zmd->dev)) { + if (dmz_bdev_is_dying(zmd->zoned_dev)) { ret = -EIO; goto out; } @@ -772,7 +772,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) /* If there are no dirty metadata blocks, just flush the device cache */ if (list_empty(&write_list)) { - ret = blkdev_issue_flush(zmd->dev->bdev, GFP_NOIO, NULL); + ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); goto err; } @@ -821,7 +821,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) list_splice(&write_list, &zmd->mblk_dirty_list); spin_unlock(&zmd->mblk_lock); } - if (!dmz_check_bdev(zmd->dev)) + if (!dmz_check_bdev(zmd->zoned_dev)) ret = -EIO; goto out; } @@ -832,7 +832,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) static int dmz_check_sb(struct dmz_metadata *zmd, struct dmz_super *sb) { unsigned int nr_meta_zones, nr_data_zones; - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; u32 crc, stored_crc; u64 gen; @@ -908,7 +908,7 @@ static int dmz_read_sb(struct dmz_metadata *zmd, unsigned int set) */ static int dmz_lookup_secondary_sb(struct dmz_metadata *zmd) { - unsigned int zone_nr_blocks = zmd->dev->zone_nr_blocks; + unsigned int zone_nr_blocks = zmd->zoned_dev->zone_nr_blocks; struct dmz_mblock *mblk; int i; @@ -972,13 +972,13 @@ static int dmz_recover_mblocks(struct dmz_metadata *zmd, unsigned int dst_set) struct page *page; int i, ret; - dmz_dev_warn(zmd->dev, "Metadata set %u invalid: recovering", dst_set); + dmz_dev_warn(zmd->zoned_dev, "Metadata set %u invalid: recovering", dst_set); if (dst_set == 0) zmd->sb[0].block = dmz_start_block(zmd, zmd->sb_zone); else { zmd->sb[1].block = zmd->sb[0].block + - (zmd->nr_meta_zones << zmd->dev->zone_nr_blocks_shift); + (zmd->nr_meta_zones << zmd->zoned_dev->zone_nr_blocks_shift); } page = alloc_page(GFP_NOIO); @@ -1027,7 +1027,7 @@ static int dmz_load_sb(struct dmz_metadata *zmd) zmd->sb[0].block = dmz_start_block(zmd, zmd->sb_zone); ret = dmz_get_sb(zmd, 0); if (ret) { - 
dmz_dev_err(zmd->dev, "Read primary super block failed"); + dmz_dev_err(zmd->zoned_dev, "Read primary super block failed"); return ret; } @@ -1037,13 +1037,13 @@ static int dmz_load_sb(struct dmz_metadata *zmd) if (ret == 0) { sb_good[0] = true; zmd->sb[1].block = zmd->sb[0].block + - (zmd->nr_meta_zones << zmd->dev->zone_nr_blocks_shift); + (zmd->nr_meta_zones << zmd->zoned_dev->zone_nr_blocks_shift); ret = dmz_get_sb(zmd, 1); } else ret = dmz_lookup_secondary_sb(zmd); if (ret) { - dmz_dev_err(zmd->dev, "Read secondary super block failed"); + dmz_dev_err(zmd->zoned_dev, "Read secondary super block failed"); return ret; } @@ -1053,7 +1053,7 @@ static int dmz_load_sb(struct dmz_metadata *zmd) /* Use highest generation sb first */ if (!sb_good[0] && !sb_good[1]) { - dmz_dev_err(zmd->dev, "No valid super block found"); + dmz_dev_err(zmd->zoned_dev, "No valid super block found"); return -EIO; } @@ -1068,7 +1068,7 @@ static int dmz_load_sb(struct dmz_metadata *zmd) ret = dmz_recover_mblocks(zmd, 1); if (ret) { - dmz_dev_err(zmd->dev, "Recovery failed"); + dmz_dev_err(zmd->zoned_dev, "Recovery failed"); return -EIO; } @@ -1080,7 +1080,7 @@ static int dmz_load_sb(struct dmz_metadata *zmd) zmd->mblk_primary = 1; } - dmz_dev_debug(zmd->dev, "Using super block %u (gen %llu)", + dmz_dev_debug(zmd->zoned_dev, "Using super block %u (gen %llu)", zmd->mblk_primary, zmd->sb_gen); return 0; @@ -1093,7 +1093,7 @@ static int dmz_init_zone(struct blk_zone *blkz, unsigned int idx, void *data) { struct dmz_metadata *zmd = data; struct dm_zone *zone = &zmd->zones[idx]; - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; /* Ignore the eventual last runt (smaller) zone */ if (blkz->len != dev->zone_nr_sectors) { @@ -1156,7 +1156,7 @@ static void dmz_drop_zones(struct dmz_metadata *zmd) */ static int dmz_init_zones(struct dmz_metadata *zmd) { - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; int ret; /* Init */ @@ -1223,16 +1223,16 @@ static int dmz_update_zone(struct dmz_metadata *zmd, struct dm_zone *zone) * GFP_NOIO was specified. 
*/ noio_flag = memalloc_noio_save(); - ret = blkdev_report_zones(zmd->dev->bdev, dmz_start_sect(zmd, zone), 1, + ret = blkdev_report_zones(zmd->zoned_dev->bdev, dmz_start_sect(zmd, zone), 1, dmz_update_zone_cb, zone); memalloc_noio_restore(noio_flag); if (ret == 0) ret = -EIO; if (ret < 0) { - dmz_dev_err(zmd->dev, "Get zone %u report failed", + dmz_dev_err(zmd->zoned_dev, "Get zone %u report failed", dmz_id(zmd, zone)); - dmz_check_bdev(zmd->dev); + dmz_check_bdev(zmd->zoned_dev); return ret; } @@ -1254,7 +1254,7 @@ static int dmz_handle_seq_write_err(struct dmz_metadata *zmd, if (ret) return ret; - dmz_dev_warn(zmd->dev, "Processing zone %u write error (zone wp %u/%u)", + dmz_dev_warn(zmd->zoned_dev, "Processing zone %u write error (zone wp %u/%u)", dmz_id(zmd, zone), zone->wp_block, wp); if (zone->wp_block < wp) { @@ -1287,7 +1287,7 @@ static int dmz_reset_zone(struct dmz_metadata *zmd, struct dm_zone *zone) return 0; if (!dmz_is_empty(zone) || dmz_seq_write_err(zone)) { - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; ret = blkdev_zone_mgmt(dev->bdev, REQ_OP_ZONE_RESET, dmz_start_sect(zmd, zone), @@ -1313,7 +1313,7 @@ static void dmz_get_zone_weight(struct dmz_metadata *zmd, struct dm_zone *zone); */ static int dmz_load_mapping(struct dmz_metadata *zmd) { - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; struct dm_zone *dzone, *bzone; struct dmz_mblock *dmap_mblk = NULL; struct dmz_map *dmap; @@ -1632,7 +1632,7 @@ struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd, unsigned int chu /* Allocate a random zone */ dzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!dzone) { - if (dmz_bdev_is_dying(zmd->dev)) { + if (dmz_bdev_is_dying(zmd->zoned_dev)) { dzone = ERR_PTR(-EIO); goto out; } @@ -1733,7 +1733,7 @@ struct dm_zone *dmz_get_chunk_buffer(struct dmz_metadata *zmd, /* Allocate a random zone */ bzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!bzone) { - if (dmz_bdev_is_dying(zmd->dev)) { + if (dmz_bdev_is_dying(zmd->zoned_dev)) { bzone = ERR_PTR(-EIO); goto out; } @@ -1795,7 +1795,7 @@ struct dm_zone *dmz_alloc_zone(struct dmz_metadata *zmd, unsigned long flags) atomic_dec(&zmd->unmap_nr_seq); if (dmz_is_offline(zone)) { - dmz_dev_warn(zmd->dev, "Zone %u is offline", dmz_id(zmd, zone)); + dmz_dev_warn(zmd->zoned_dev, "Zone %u is offline", dmz_id(zmd, zone)); zone = NULL; goto again; } @@ -1943,7 +1943,7 @@ int dmz_copy_valid_blocks(struct dmz_metadata *zmd, struct dm_zone *from_zone, sector_t chunk_block = 0; /* Get the zones bitmap blocks */ - while (chunk_block < zmd->dev->zone_nr_blocks) { + while (chunk_block < zmd->zoned_dev->zone_nr_blocks) { from_mblk = dmz_get_bitmap(zmd, from_zone, chunk_block); if (IS_ERR(from_mblk)) return PTR_ERR(from_mblk); @@ -1978,7 +1978,7 @@ int dmz_merge_valid_blocks(struct dmz_metadata *zmd, struct dm_zone *from_zone, int ret; /* Get the zones bitmap blocks */ - while (chunk_block < zmd->dev->zone_nr_blocks) { + while (chunk_block < zmd->zoned_dev->zone_nr_blocks) { /* Get a valid region from the source zone */ ret = dmz_first_valid_block(zmd, from_zone, &chunk_block); if (ret <= 0) @@ -2002,11 +2002,11 @@ int dmz_validate_blocks(struct dmz_metadata *zmd, struct dm_zone *zone, sector_t chunk_block, unsigned int nr_blocks) { unsigned int count, bit, nr_bits; - unsigned int zone_nr_blocks = zmd->dev->zone_nr_blocks; + unsigned int zone_nr_blocks = zmd->zoned_dev->zone_nr_blocks; struct dmz_mblock *mblk; unsigned int n = 0; - dmz_dev_debug(zmd->dev, "=> VALIDATE zone %u, block %llu, %u blocks", + 
dmz_dev_debug(zmd->zoned_dev, "=> VALIDATE zone %u, block %llu, %u blocks", dmz_id(zmd, zone), (unsigned long long)chunk_block, nr_blocks); @@ -2036,7 +2036,7 @@ int dmz_validate_blocks(struct dmz_metadata *zmd, struct dm_zone *zone, if (likely(zone->weight + n <= zone_nr_blocks)) zone->weight += n; else { - dmz_dev_warn(zmd->dev, "Zone %u: weight %u should be <= %u", + dmz_dev_warn(zmd->zoned_dev, "Zone %u: weight %u should be <= %u", dmz_id(zmd, zone), zone->weight, zone_nr_blocks - n); zone->weight = zone_nr_blocks; @@ -2086,10 +2086,10 @@ int dmz_invalidate_blocks(struct dmz_metadata *zmd, struct dm_zone *zone, struct dmz_mblock *mblk; unsigned int n = 0; - dmz_dev_debug(zmd->dev, "=> INVALIDATE zone %u, block %llu, %u blocks", + dmz_dev_debug(zmd->zoned_dev, "=> INVALIDATE zone %u, block %llu, %u blocks", dmz_id(zmd, zone), (u64)chunk_block, nr_blocks); - WARN_ON(chunk_block + nr_blocks > zmd->dev->zone_nr_blocks); + WARN_ON(chunk_block + nr_blocks > zmd->zoned_dev->zone_nr_blocks); while (nr_blocks) { /* Get bitmap block */ @@ -2116,7 +2116,7 @@ int dmz_invalidate_blocks(struct dmz_metadata *zmd, struct dm_zone *zone, if (zone->weight >= n) zone->weight -= n; else { - dmz_dev_warn(zmd->dev, "Zone %u: weight %u should be >= %u", + dmz_dev_warn(zmd->zoned_dev, "Zone %u: weight %u should be >= %u", dmz_id(zmd, zone), zone->weight, n); zone->weight = 0; } @@ -2133,7 +2133,7 @@ static int dmz_test_block(struct dmz_metadata *zmd, struct dm_zone *zone, struct dmz_mblock *mblk; int ret; - WARN_ON(chunk_block >= zmd->dev->zone_nr_blocks); + WARN_ON(chunk_block >= zmd->zoned_dev->zone_nr_blocks); /* Get bitmap block */ mblk = dmz_get_bitmap(zmd, zone, chunk_block); @@ -2163,7 +2163,7 @@ static int dmz_to_next_set_block(struct dmz_metadata *zmd, struct dm_zone *zone, unsigned long *bitmap; int n = 0; - WARN_ON(chunk_block + nr_blocks > zmd->dev->zone_nr_blocks); + WARN_ON(chunk_block + nr_blocks > zmd->zoned_dev->zone_nr_blocks); while (nr_blocks) { /* Get bitmap block */ @@ -2207,7 +2207,7 @@ int dmz_block_valid(struct dmz_metadata *zmd, struct dm_zone *zone, /* The block is valid: get the number of valid blocks from block */ return dmz_to_next_set_block(zmd, zone, chunk_block, - zmd->dev->zone_nr_blocks - chunk_block, 0); + zmd->zoned_dev->zone_nr_blocks - chunk_block, 0); } /* @@ -2223,7 +2223,7 @@ int dmz_first_valid_block(struct dmz_metadata *zmd, struct dm_zone *zone, int ret; ret = dmz_to_next_set_block(zmd, zone, start_block, - zmd->dev->zone_nr_blocks - start_block, 1); + zmd->zoned_dev->zone_nr_blocks - start_block, 1); if (ret < 0) return ret; @@ -2231,7 +2231,7 @@ int dmz_first_valid_block(struct dmz_metadata *zmd, struct dm_zone *zone, *chunk_block = start_block; return dmz_to_next_set_block(zmd, zone, start_block, - zmd->dev->zone_nr_blocks - start_block, 0); + zmd->zoned_dev->zone_nr_blocks - start_block, 0); } /* @@ -2270,7 +2270,7 @@ static void dmz_get_zone_weight(struct dmz_metadata *zmd, struct dm_zone *zone) struct dmz_mblock *mblk; sector_t chunk_block = 0; unsigned int bit, nr_bits; - unsigned int nr_blocks = zmd->dev->zone_nr_blocks; + unsigned int nr_blocks = zmd->zoned_dev->zone_nr_blocks; void *bitmap; int n = 0; @@ -2326,7 +2326,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd) while (!list_empty(&zmd->mblk_dirty_list)) { mblk = list_first_entry(&zmd->mblk_dirty_list, struct dmz_mblock, link); - dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)", + dmz_dev_warn(zmd->zoned_dev, "mblock %llu still in dirty list (ref %u)", (u64)mblk->no, 
mblk->ref); list_del_init(&mblk->link); rb_erase(&mblk->node, &zmd->mblk_rbtree); @@ -2344,7 +2344,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd) /* Sanity checks: the mblock rbtree should now be empty */ root = &zmd->mblk_rbtree; rbtree_postorder_for_each_entry_safe(mblk, next, root, node) { - dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree", + dmz_dev_warn(zmd->zoned_dev, "mblock %llu ref %u still in rbtree", (u64)mblk->no, mblk->ref); mblk->ref = 0; dmz_free_mblock(zmd, mblk); @@ -2371,7 +2371,7 @@ int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_metadata **metadata) if (!zmd) return -ENOMEM; - zmd->dev = dev; + zmd->zoned_dev = dev; zmd->mblk_rbtree = RB_ROOT; init_rwsem(&zmd->mblk_sem); mutex_init(&zmd->mblk_flush_lock); @@ -2488,7 +2488,7 @@ void dmz_dtr_metadata(struct dmz_metadata *zmd) */ int dmz_resume_metadata(struct dmz_metadata *zmd) { - struct dmz_dev *dev = zmd->dev; + struct dmz_dev *dev = zmd->zoned_dev; struct dm_zone *zone; sector_t wp_block; unsigned int i; diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 70a1063..28f4d00 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -43,7 +43,7 @@ struct dmz_target { unsigned long flags; /* Zoned block device information */ - struct dmz_dev *dev; + struct dmz_dev *zoned_dev; /* For metadata handling */ struct dmz_metadata *metadata; @@ -81,7 +81,7 @@ static inline void dmz_bio_endio(struct bio *bio, blk_status_t status) if (status != BLK_STS_OK && bio->bi_status == BLK_STS_OK) bio->bi_status = status; if (bio->bi_status != BLK_STS_OK) - bioctx->target->dev->flags |= DMZ_CHECK_BDEV; + bioctx->target->zoned_dev->flags |= DMZ_CHECK_BDEV; if (refcount_dec_and_test(&bioctx->ref)) { struct dm_zone *zone = bioctx->zone; @@ -125,7 +125,7 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone, if (!clone) return -ENOMEM; - bio_set_dev(clone, dmz->dev->bdev); + bio_set_dev(clone, dmz->zoned_dev->bdev); clone->bi_iter.bi_sector = dmz_start_sect(dmz->metadata, zone) + dmz_blk2sect(chunk_block); clone->bi_iter.bi_size = dmz_blk2sect(nr_blocks) << SECTOR_SHIFT; @@ -165,7 +165,7 @@ static void dmz_handle_read_zero(struct dmz_target *dmz, struct bio *bio, static int dmz_handle_read(struct dmz_target *dmz, struct dm_zone *zone, struct bio *bio) { - sector_t chunk_block = dmz_chunk_block(dmz->dev, dmz_bio_block(bio)); + sector_t chunk_block = dmz_chunk_block(dmz->zoned_dev, dmz_bio_block(bio)); unsigned int nr_blocks = dmz_bio_blocks(bio); sector_t end_block = chunk_block + nr_blocks; struct dm_zone *rzone, *bzone; @@ -177,8 +177,8 @@ static int dmz_handle_read(struct dmz_target *dmz, struct dm_zone *zone, return 0; } - dmz_dev_debug(dmz->dev, "READ chunk %llu -> %s zone %u, block %llu, %u blocks", - (unsigned long long)dmz_bio_chunk(dmz->dev, bio), + dmz_dev_debug(dmz->zoned_dev, "READ chunk %llu -> %s zone %u, block %llu, %u blocks", + (unsigned long long)dmz_bio_chunk(dmz->zoned_dev, bio), (dmz_is_rnd(zone) ? 
"RND" : "SEQ"), dmz_id(dmz->metadata, zone), (unsigned long long)chunk_block, nr_blocks); @@ -308,14 +308,14 @@ static int dmz_handle_buffered_write(struct dmz_target *dmz, static int dmz_handle_write(struct dmz_target *dmz, struct dm_zone *zone, struct bio *bio) { - sector_t chunk_block = dmz_chunk_block(dmz->dev, dmz_bio_block(bio)); + sector_t chunk_block = dmz_chunk_block(dmz->zoned_dev, dmz_bio_block(bio)); unsigned int nr_blocks = dmz_bio_blocks(bio); if (!zone) return -ENOSPC; - dmz_dev_debug(dmz->dev, "WRITE chunk %llu -> %s zone %u, block %llu, %u blocks", - (unsigned long long)dmz_bio_chunk(dmz->dev, bio), + dmz_dev_debug(dmz->zoned_dev, "WRITE chunk %llu -> %s zone %u, block %llu, %u blocks", + (unsigned long long)dmz_bio_chunk(dmz->zoned_dev, bio), (dmz_is_rnd(zone) ? "RND" : "SEQ"), dmz_id(dmz->metadata, zone), (unsigned long long)chunk_block, nr_blocks); @@ -345,7 +345,7 @@ static int dmz_handle_discard(struct dmz_target *dmz, struct dm_zone *zone, struct dmz_metadata *zmd = dmz->metadata; sector_t block = dmz_bio_block(bio); unsigned int nr_blocks = dmz_bio_blocks(bio); - sector_t chunk_block = dmz_chunk_block(dmz->dev, block); + sector_t chunk_block = dmz_chunk_block(dmz->zoned_dev, block); int ret = 0; /* For unmapped chunks, there is nothing to do */ @@ -355,8 +355,8 @@ static int dmz_handle_discard(struct dmz_target *dmz, struct dm_zone *zone, if (dmz_is_readonly(zone)) return -EROFS; - dmz_dev_debug(dmz->dev, "DISCARD chunk %llu -> zone %u, block %llu, %u blocks", - (unsigned long long)dmz_bio_chunk(dmz->dev, bio), + dmz_dev_debug(dmz->zoned_dev, "DISCARD chunk %llu -> zone %u, block %llu, %u blocks", + (unsigned long long)dmz_bio_chunk(dmz->zoned_dev, bio), dmz_id(zmd, zone), (unsigned long long)chunk_block, nr_blocks); @@ -392,7 +392,7 @@ static void dmz_handle_bio(struct dmz_target *dmz, struct dm_chunk_work *cw, dmz_lock_metadata(zmd); - if (dmz->dev->flags & DMZ_BDEV_DYING) { + if (dmz->zoned_dev->flags & DMZ_BDEV_DYING) { ret = -EIO; goto out; } @@ -402,7 +402,7 @@ static void dmz_handle_bio(struct dmz_target *dmz, struct dm_chunk_work *cw, * mapping for read and discard. If a mapping is obtained, + the zone returned will be set to active state. 
*/ - zone = dmz_get_chunk_mapping(zmd, dmz_bio_chunk(dmz->dev, bio), + zone = dmz_get_chunk_mapping(zmd, dmz_bio_chunk(dmz->zoned_dev, bio), bio_op(bio)); if (IS_ERR(zone)) { ret = PTR_ERR(zone); @@ -427,7 +427,7 @@ static void dmz_handle_bio(struct dmz_target *dmz, struct dm_chunk_work *cw, ret = dmz_handle_discard(dmz, zone, bio); break; default: - dmz_dev_err(dmz->dev, "Unsupported BIO operation 0x%x", + dmz_dev_err(dmz->zoned_dev, "Unsupported BIO operation 0x%x", bio_op(bio)); ret = -EIO; } @@ -502,7 +502,7 @@ static void dmz_flush_work(struct work_struct *work) /* Flush dirty metadata blocks */ ret = dmz_flush_metadata(dmz->metadata); if (ret) - dmz_dev_debug(dmz->dev, "Metadata flush failed, rc=%d\n", ret); + dmz_dev_debug(dmz->zoned_dev, "Metadata flush failed, rc=%d\n", ret); /* Process queued flush requests */ while (1) { @@ -525,7 +525,7 @@ static void dmz_flush_work(struct work_struct *work) */ static int dmz_queue_chunk_work(struct dmz_target *dmz, struct bio *bio) { - unsigned int chunk = dmz_bio_chunk(dmz->dev, bio); + unsigned int chunk = dmz_bio_chunk(dmz->zoned_dev, bio); struct dm_chunk_work *cw; int ret = 0; @@ -618,20 +618,20 @@ bool dmz_check_bdev(struct dmz_dev *dmz_dev) static int dmz_map(struct dm_target *ti, struct bio *bio) { struct dmz_target *dmz = ti->private; - struct dmz_dev *dev = dmz->dev; + struct dmz_dev *dev = dmz->zoned_dev; struct dmz_bioctx *bioctx = dm_per_bio_data(bio, sizeof(struct dmz_bioctx)); sector_t sector = bio->bi_iter.bi_sector; unsigned int nr_sectors = bio_sectors(bio); sector_t chunk_sector; int ret; - if (dmz_bdev_is_dying(dmz->dev)) + if (dmz_bdev_is_dying(dmz->zoned_dev)) return DM_MAPIO_KILL; dmz_dev_debug(dev, "BIO op %d sector %llu + %u => chunk %llu, block %llu, %u blocks", bio_op(bio), (unsigned long long)sector, nr_sectors, - (unsigned long long)dmz_bio_chunk(dmz->dev, bio), - (unsigned long long)dmz_chunk_block(dmz->dev, dmz_bio_block(bio)), + (unsigned long long)dmz_bio_chunk(dmz->zoned_dev, bio), + (unsigned long long)dmz_chunk_block(dmz->zoned_dev, dmz_bio_block(bio)), (unsigned int)dmz_bio_blocks(bio)); bio_set_dev(bio, dev->bdev); @@ -666,9 +666,9 @@ static int dmz_map(struct dm_target *ti, struct bio *bio) /* Now ready to handle this BIO */ ret = dmz_queue_chunk_work(dmz, bio); if (ret) { - dmz_dev_debug(dmz->dev, + dmz_dev_debug(dmz->zoned_dev, "BIO op %d, can't process chunk %llu, err %i\n", - bio_op(bio), (u64)dmz_bio_chunk(dmz->dev, bio), + bio_op(bio), (u64)dmz_bio_chunk(dmz->zoned_dev, bio), ret); return DM_MAPIO_REQUEUE; } @@ -729,7 +729,7 @@ static int dmz_get_zoned_device(struct dm_target *ti, char *path) dev->nr_zones = blkdev_nr_zones(dev->bdev->bd_disk); - dmz->dev = dev; + dmz->zoned_dev = dev; return 0; err: @@ -747,8 +747,8 @@ static void dmz_put_zoned_device(struct dm_target *ti) struct dmz_target *dmz = ti->private; dm_put_device(ti, dmz->ddev); - kfree(dmz->dev); - dmz->dev = NULL; + kfree(dmz->zoned_dev); + dmz->zoned_dev = NULL; } /* @@ -782,7 +782,7 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv) } /* Initialize metadata */ - dev = dmz->dev; + dev = dmz->zoned_dev; ret = dmz_ctr_metadata(dev, &dmz->metadata); if (ret) { ti->error = "Metadata initialization failed"; @@ -895,7 +895,7 @@ static void dmz_dtr(struct dm_target *ti) static void dmz_io_hints(struct dm_target *ti, struct queue_limits *limits) { struct dmz_target *dmz = ti->private; - unsigned int chunk_sectors = dmz->dev->zone_nr_sectors; + unsigned int chunk_sectors = dmz->zoned_dev->zone_nr_sectors; 
limits->logical_block_size = DMZ_BLOCK_SIZE; limits->physical_block_size = DMZ_BLOCK_SIZE; @@ -924,10 +924,10 @@ static int dmz_prepare_ioctl(struct dm_target *ti, struct block_device **bdev) { struct dmz_target *dmz = ti->private; - if (!dmz_check_bdev(dmz->dev)) + if (!dmz_check_bdev(dmz->zoned_dev)) return -EIO; - *bdev = dmz->dev->bdev; + *bdev = dmz->zoned_dev->bdev; return 0; } @@ -959,7 +959,7 @@ static int dmz_iterate_devices(struct dm_target *ti, iterate_devices_callout_fn fn, void *data) { struct dmz_target *dmz = ti->private; - struct dmz_dev *dev = dmz->dev; + struct dmz_dev *dev = dmz->zoned_dev; sector_t capacity = dev->capacity & ~(dev->zone_nr_sectors - 1); return fn(ti, dmz->ddev, 0, capacity, data);

From patchwork Tue Mar 24 11:02:54 2020
X-Patchwork-Submitter: Bob Liu
X-Patchwork-Id: 11455161
From: Bob Liu
To: dm-devel@redhat.com
Cc: Damien.LeMoal@wdc.com, linux-block@vger.kernel.org, Dmitry.Fomichev@wdc.com, hare@suse.de, Bob Liu
Subject: [RFC PATCH v2 2/3] dm zoned: introduce regular device to dm-zoned-target
Date: Tue, 24 Mar 2020 19:02:54 +0800
Message-Id: <20200324110255.8385-3-bob.liu@oracle.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20200324110255.8385-1-bob.liu@oracle.com>
References: <20200324110255.8385-1-bob.liu@oracle.com>

Introduce a regular device for storing metadata and buffering writes; the zoned device is used by default if no regular device is set via dmsetup. The corresponding dmsetup command is:

  echo "0 $size zoned $zoned_device $regular_device" | dmsetup create $dm_zoned_name

(The zoned device is the first table argument and the regular device the optional second one, matching dmz_ctr() below.)

Signed-off-by: Bob Liu
---
 drivers/md/dm-zoned-target.c | 141 +++++++++++++++++++++++++------------------
 drivers/md/dm-zoned.h        |  50 +++++++++++++--
 2 files changed, 127 insertions(+), 64 deletions(-)

diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 28f4d00..cae4bfe 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -35,38 +35,6 @@ struct dm_chunk_work { }; /* - * Target descriptor. - */ -struct dmz_target { - struct dm_dev *ddev; - - unsigned long flags; - - /* Zoned block device information */ - struct dmz_dev *zoned_dev; - - /* For metadata handling */ - struct dmz_metadata *metadata; - - /* For reclaim */ - struct dmz_reclaim *reclaim; - - /* For chunk work */ - struct radix_tree_root chunk_rxtree; - struct workqueue_struct *chunk_wq; - struct mutex chunk_lock; - - /* For cloned BIOs to zones */ - struct bio_set bio_set; - - /* For flush */ - spinlock_t flush_lock; - struct bio_list flush_list; - struct delayed_work flush_work; - struct workqueue_struct *flush_wq; -}; - -/* * Flush intervals (seconds). */ #define DMZ_FLUSH_PERIOD (10 * HZ) @@ -679,7 +647,7 @@ static int dmz_map(struct dm_target *ti, struct bio *bio) /* * Get zoned device information.
*/ -static int dmz_get_zoned_device(struct dm_target *ti, char *path) +static int dmz_get_device(struct dm_target *ti, char *path, bool zoned) { struct dmz_target *dmz = ti->private; struct request_queue *q; @@ -688,11 +656,22 @@ static int dmz_get_zoned_device(struct dm_target *ti, char *path) int ret; /* Get the target device */ - ret = dm_get_device(ti, path, dm_table_get_mode(ti->table), &dmz->ddev); - if (ret) { - ti->error = "Get target device failed"; - dmz->ddev = NULL; - return ret; + if (zoned) { + ret = dm_get_device(ti, path, dm_table_get_mode(ti->table), + &dmz->ddev); + if (ret) { + ti->error = "Get target device failed"; + dmz->ddev = NULL; + return ret; + } + } else { + ret = dm_get_device(ti, path, dm_table_get_mode(ti->table), + &dmz->regu_dm_dev); + if (ret) { + ti->error = "Get target device failed"; + dmz->regu_dm_dev = NULL; + return ret; + } } dev = kzalloc(sizeof(struct dmz_dev), GFP_KERNEL); @@ -701,39 +680,61 @@ static int dmz_get_zoned_device(struct dm_target *ti, char *path) goto err; } - dev->bdev = dmz->ddev->bdev; - (void)bdevname(dev->bdev, dev->name); - - if (bdev_zoned_model(dev->bdev) == BLK_ZONED_NONE) { - ti->error = "Not a zoned block device"; - ret = -EINVAL; - goto err; + if (zoned) { + dev->bdev = dmz->ddev->bdev; + if (bdev_zoned_model(dev->bdev) == BLK_ZONED_NONE) { + ti->error = "Not a zoned block device"; + ret = -EINVAL; + goto err; + } } + else + dev->bdev = dmz->regu_dm_dev->bdev; + + (void)bdevname(dev->bdev, dev->name); + dev->target = dmz; q = bdev_get_queue(dev->bdev); dev->capacity = i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT; aligned_capacity = dev->capacity & ~((sector_t)blk_queue_zone_sectors(q) - 1); - if (ti->begin || - ((ti->len != dev->capacity) && (ti->len != aligned_capacity))) { - ti->error = "Partial mapping not supported"; - ret = -EINVAL; - goto err; - } - dev->zone_nr_sectors = blk_queue_zone_sectors(q); - dev->zone_nr_sectors_shift = ilog2(dev->zone_nr_sectors); + if (zoned) { + if (ti->begin || ((ti->len != dev->capacity) && + (ti->len != aligned_capacity))) { + ti->error = "Partial mapping not supported"; + ret = -EINVAL; + goto err; + } + dev->zone_nr_sectors = blk_queue_zone_sectors(q); + dev->zone_nr_sectors_shift = ilog2(dev->zone_nr_sectors); + + dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors); + dev->zone_nr_blocks_shift = ilog2(dev->zone_nr_blocks); - dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors); - dev->zone_nr_blocks_shift = ilog2(dev->zone_nr_blocks); + dev->nr_zones = blkdev_nr_zones(dev->bdev->bd_disk); - dev->nr_zones = blkdev_nr_zones(dev->bdev->bd_disk); + dmz->zoned_dev = dev; + } else { + /* Emulate regular device zone info by using the same zone size.*/ + dev->zone_nr_sectors = dmz->zoned_dev->zone_nr_sectors; + dev->zone_nr_sectors_shift = ilog2(dev->zone_nr_sectors); - dmz->zoned_dev = dev; + dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors); + dev->zone_nr_blocks_shift = ilog2(dev->zone_nr_blocks); + + dev->nr_zones = (get_capacity(dev->bdev->bd_disk) >> + ilog2(dev->zone_nr_sectors)); + + dmz->regu_dmz_dev = dev; + } return 0; err: - dm_put_device(ti, dmz->ddev); + if (zoned) + dm_put_device(ti, dmz->ddev); + else + dm_put_device(ti, dmz->regu_dm_dev); kfree(dev); return ret; @@ -746,6 +747,12 @@ static void dmz_put_zoned_device(struct dm_target *ti) { struct dmz_target *dmz = ti->private; + if (dmz->regu_dm_dev) + dm_put_device(ti, dmz->regu_dm_dev); + if (dmz->regu_dmz_dev) { + kfree(dmz->regu_dmz_dev); + dmz->regu_dmz_dev = NULL; + } dm_put_device(ti, dmz->ddev); 
kfree(dmz->zoned_dev); dmz->zoned_dev = NULL; @@ -761,7 +768,7 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv) int ret; /* Check arguments */ - if (argc != 1) { + if ((argc != 1) && (argc != 2)) { ti->error = "Invalid argument count"; return -EINVAL; } @@ -775,12 +782,25 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv) ti->private = dmz; /* Get the target zoned block device */ - ret = dmz_get_zoned_device(ti, argv[0]); + ret = dmz_get_device(ti, argv[0], 1); if (ret) { dmz->ddev = NULL; goto err; } + snprintf(dmz->name, BDEVNAME_SIZE, "%s", dmz->zoned_dev->name); + dmz->nr_zones = dmz->zoned_dev->nr_zones; + if (argc == 2) { + ret = dmz_get_device(ti, argv[1], 0); + if (ret) { + dmz->regu_dm_dev = NULL; + goto err; + } + snprintf(dmz->name, BDEVNAME_SIZE * 2, "%s:%s", + dmz->zoned_dev->name, dmz->regu_dmz_dev->name); + dmz->nr_zones += dmz->regu_dmz_dev->nr_zones; + } + /* Initialize metadata */ dev = dmz->zoned_dev; ret = dmz_ctr_metadata(dev, &dmz->metadata); @@ -962,6 +982,7 @@ static int dmz_iterate_devices(struct dm_target *ti, struct dmz_dev *dev = dmz->zoned_dev; sector_t capacity = dev->capacity & ~(dev->zone_nr_sectors - 1); + /* Todo: fn(dmz->regu_dm_dev) */ return fn(ti, dmz->ddev, 0, capacity, data); } diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h index 5b5e493..a3535bc 100644 --- a/drivers/md/dm-zoned.h +++ b/drivers/md/dm-zoned.h @@ -46,9 +46,51 @@ #define dmz_bio_blocks(bio) dmz_sect2blk(bio_sectors(bio)) /* + * Target descriptor. + */ +struct dmz_target { + struct dm_dev *ddev; + /* + * Regular device for store metdata and buffer write, use zoned device + * by default if no regular device was set. + */ + struct dm_dev *regu_dm_dev; + struct dmz_dev *regu_dmz_dev; + /* Total nr_zones. */ + unsigned int nr_zones; + char name[BDEVNAME_SIZE * 2]; + + unsigned long flags; + + /* Zoned block device information */ + struct dmz_dev *zoned_dev; + + /* For metadata handling */ + struct dmz_metadata *metadata; + + /* For reclaim */ + struct dmz_reclaim *reclaim; + + /* For chunk work */ + struct radix_tree_root chunk_rxtree; + struct workqueue_struct *chunk_wq; + struct mutex chunk_lock; + + /* For cloned BIOs to zones */ + struct bio_set bio_set; + + /* For flush */ + spinlock_t flush_lock; + struct bio_list flush_list; + struct delayed_work flush_work; + struct workqueue_struct *flush_wq; +}; + +/* * Zoned block device information. */ struct dmz_dev { + struct dmz_target *target; struct block_device *bdev; char name[BDEVNAME_SIZE]; @@ -147,16 +189,16 @@ enum { * Message functions. */ #define dmz_dev_info(dev, format, args...) \ - DMINFO("(%s): " format, (dev)->name, ## args) + DMINFO("(%s): " format, (dev)->target->name, ## args) #define dmz_dev_err(dev, format, args...) \ - DMERR("(%s): " format, (dev)->name, ## args) + DMERR("(%s): " format, (dev)->target->name, ## args) #define dmz_dev_warn(dev, format, args...) \ - DMWARN("(%s): " format, (dev)->name, ## args) + DMWARN("(%s): " format, (dev)->target->name, ## args) #define dmz_dev_debug(dev, format, args...) 
\ - DMDEBUG("(%s): " format, (dev)->name, ## args) + DMDEBUG("(%s): " format, (dev)->target->name, ## args) struct dmz_metadata; struct dmz_reclaim;

From patchwork Tue Mar 24 11:02:55 2020
X-Patchwork-Submitter: Bob Liu
X-Patchwork-Id: 11455163
From: Bob Liu
To: dm-devel@redhat.com
Cc: Damien.LeMoal@wdc.com, linux-block@vger.kernel.org, Dmitry.Fomichev@wdc.com, hare@suse.de, Bob Liu
Subject: [RFC PATCH v2 3/3] dm zoned: add regular device info to metadata
Date: Tue, 24 Mar 2020 19:02:55 +0800
Message-Id: <20200324110255.8385-4-bob.liu@oracle.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20200324110255.8385-1-bob.liu@oracle.com>
References: <20200324110255.8385-1-bob.liu@oracle.com>

This patch implements metadata support for the regular device by:
- Emulating zone information for the regular device.
- Storing the metadata at the beginning of the regular device.

  | --- zoned device --- | --- regular device --- |
  ^                      ^
  |                      |
  zone 0                 Metadata

(A sketch of the resulting zone-ID mapping follows the series below.)

Signed-off-by: Bob Liu
---
 drivers/md/dm-zoned-metadata.c | 135 +++++++++++++++++++++++++++++++----------
 drivers/md/dm-zoned-target.c   |   6 +-
 drivers/md/dm-zoned.h          |   3 +-
 3 files changed, 108 insertions(+), 36 deletions(-)

diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c index e0e8be0..a96158a 100644 --- a/drivers/md/dm-zoned-metadata.c +++ b/drivers/md/dm-zoned-metadata.c @@ -131,6 +131,7 @@ struct dmz_sb { */ struct dmz_metadata { struct dmz_dev *zoned_dev; + struct dmz_dev *regu_dmz_dev; sector_t zone_bitmap_size; unsigned int zone_nr_bitmap_blocks; @@ -187,6 +188,15 @@ struct dmz_metadata { /* * Various accessors */ +static inline struct dmz_dev *zmd_mdev(struct dmz_metadata *zmd) +{ + /* Metadata always stores in regular device if there is. */ + if (zmd->regu_dmz_dev) + return zmd->regu_dmz_dev; + else + return zmd->zoned_dev; +} + unsigned int dmz_id(struct dmz_metadata *zmd, struct dm_zone *zone) { return ((unsigned int)(zone - zmd->zones)); @@ -194,12 +204,33 @@ unsigned int dmz_id(struct dmz_metadata *zmd, struct dm_zone *zone) sector_t dmz_start_sect(struct dmz_metadata *zmd, struct dm_zone *zone) { - return (sector_t)dmz_id(zmd, zone) << zmd->zoned_dev->zone_nr_sectors_shift; + int dmz_real_id; + + dmz_real_id = dmz_id(zmd, zone); + if (dmz_real_id >= zmd->zoned_dev->nr_zones) { + /* Regular dev. */ + dmz_real_id -= zmd->zoned_dev->nr_zones; + WARN_ON(!zmd->regu_dmz_dev); + + return (sector_t)dmz_real_id << zmd->zoned_dev->zone_nr_sectors_shift; + } + return (sector_t)dmz_real_id << zmd->zoned_dev->zone_nr_sectors_shift; } sector_t dmz_start_block(struct dmz_metadata *zmd, struct dm_zone *zone) { - return (sector_t)dmz_id(zmd, zone) << zmd->zoned_dev->zone_nr_blocks_shift; + int dmz_real_id; + + dmz_real_id = dmz_id(zmd, zone); + if (dmz_real_id >= zmd->zoned_dev->nr_zones) { + /* Regular dev.
*/ + dmz_real_id -= zmd->zoned_dev->nr_zones; + WARN_ON(!zmd->regu_dmz_dev); + + return (sector_t)dmz_real_id << zmd->zoned_dev->zone_nr_blocks_shift; + } + + return (sector_t)dmz_real_id << zmd->zoned_dev->zone_nr_blocks_shift; } unsigned int dmz_nr_chunks(struct dmz_metadata *zmd) @@ -403,8 +434,10 @@ static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd, struct dmz_mblock *mblk, *m; sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no; struct bio *bio; + struct dmz_dev *mdev; - if (dmz_bdev_is_dying(zmd->zoned_dev)) + mdev = zmd_mdev(zmd); + if (dmz_bdev_is_dying(mdev)) return ERR_PTR(-EIO); /* Get a new block and a BIO to read it */ @@ -440,7 +473,7 @@ static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd, /* Submit read BIO */ bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->zoned_dev->bdev); + bio_set_dev(bio, mdev->bdev); bio->bi_private = mblk; bio->bi_end_io = dmz_mblock_bio_end_io; bio_set_op_attrs(bio, REQ_OP_READ, REQ_META | REQ_PRIO); @@ -555,7 +588,7 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd, TASK_UNINTERRUPTIBLE); if (test_bit(DMZ_META_ERROR, &mblk->state)) { dmz_release_mblock(zmd, mblk); - dmz_check_bdev(zmd->zoned_dev); + dmz_check_bdev(zmd_mdev(zmd)); return ERR_PTR(-EIO); } @@ -581,8 +614,10 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, { sector_t block = zmd->sb[set].block + mblk->no; struct bio *bio; + struct dmz_dev *mdev; - if (dmz_bdev_is_dying(zmd->zoned_dev)) + mdev = zmd_mdev(zmd); + if (dmz_bdev_is_dying(mdev)) return -EIO; bio = bio_alloc(GFP_NOIO, 1); @@ -594,7 +629,7 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk, set_bit(DMZ_META_WRITING, &mblk->state); bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->zoned_dev->bdev); + bio_set_dev(bio, mdev->bdev); bio->bi_private = mblk; bio->bi_end_io = dmz_mblock_bio_end_io; bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_META | REQ_PRIO); @@ -612,8 +647,10 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, { struct bio *bio; int ret; + struct dmz_dev *mdev; - if (dmz_bdev_is_dying(zmd->zoned_dev)) + mdev = zmd_mdev(zmd); + if (dmz_bdev_is_dying(mdev)) return -EIO; bio = bio_alloc(GFP_NOIO, 1); @@ -621,14 +658,14 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, return -ENOMEM; bio->bi_iter.bi_sector = dmz_blk2sect(block); - bio_set_dev(bio, zmd->zoned_dev->bdev); + bio_set_dev(bio, mdev->bdev); bio_set_op_attrs(bio, op, REQ_SYNC | REQ_META | REQ_PRIO); bio_add_page(bio, page, DMZ_BLOCK_SIZE, 0); ret = submit_bio_wait(bio); bio_put(bio); if (ret) - dmz_check_bdev(zmd->zoned_dev); + dmz_check_bdev(mdev); return ret; } @@ -661,7 +698,7 @@ static int dmz_write_sb(struct dmz_metadata *zmd, unsigned int set) ret = dmz_rdwr_block(zmd, REQ_OP_WRITE, block, mblk->page); if (ret == 0) - ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); + ret = blkdev_issue_flush(zmd_mdev(zmd)->bdev, GFP_NOIO, NULL); return ret; } @@ -695,15 +732,20 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd, TASK_UNINTERRUPTIBLE); if (test_bit(DMZ_META_ERROR, &mblk->state)) { clear_bit(DMZ_META_ERROR, &mblk->state); - dmz_check_bdev(zmd->zoned_dev); + dmz_check_bdev(zmd_mdev(zmd)); ret = -EIO; } nr_mblks_submitted--; } /* Flush drive cache (this will also sync data) */ - if (ret == 0) - ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); + if (ret == 0) { + /* Flush metadata device */ + ret 
= blkdev_issue_flush(zmd_mdev(zmd)->bdev, GFP_NOIO, NULL); + if ((ret == 0) && zmd->regu_dmz_dev) + /* Flush data device. */ + ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); + } return ret; } @@ -760,7 +802,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) */ dmz_lock_flush(zmd); - if (dmz_bdev_is_dying(zmd->zoned_dev)) { + if (dmz_bdev_is_dying(zmd_mdev(zmd))) { ret = -EIO; goto out; } @@ -772,7 +814,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) /* If there are no dirty metadata blocks, just flush the device cache */ if (list_empty(&write_list)) { - ret = blkdev_issue_flush(zmd->zoned_dev->bdev, GFP_NOIO, NULL); + ret = blkdev_issue_flush(zmd_mdev(zmd)->bdev, GFP_NOIO, NULL); goto err; } @@ -821,7 +863,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) list_splice(&write_list, &zmd->mblk_dirty_list); spin_unlock(&zmd->mblk_lock); } - if (!dmz_check_bdev(zmd->zoned_dev)) + if (!dmz_check_bdev(zmd_mdev(zmd))) ret = -EIO; goto out; } @@ -832,10 +874,11 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) static int dmz_check_sb(struct dmz_metadata *zmd, struct dmz_super *sb) { unsigned int nr_meta_zones, nr_data_zones; - struct dmz_dev *dev = zmd->zoned_dev; + struct dmz_dev *dev; u32 crc, stored_crc; u64 gen; + dev = zmd_mdev(zmd); gen = le64_to_cpu(sb->gen); stored_crc = le32_to_cpu(sb->crc); sb->crc = 0; @@ -1131,8 +1174,11 @@ static int dmz_init_zone(struct blk_zone *blkz, unsigned int idx, void *data) zmd->nr_useable_zones++; if (dmz_is_rnd(zone)) { zmd->nr_rnd_zones++; - if (!zmd->sb_zone) { - /* Super block zone */ + if (!zmd->sb_zone && !zmd->regu_dmz_dev) { + /* + * Super block zone goes to regular + * device by default. + */ zmd->sb_zone = zone; } } @@ -1157,7 +1203,8 @@ static void dmz_drop_zones(struct dmz_metadata *zmd) static int dmz_init_zones(struct dmz_metadata *zmd) { struct dmz_dev *dev = zmd->zoned_dev; - int ret; + int ret, i; + unsigned int total_nr_zones; /* Init */ zmd->zone_bitmap_size = dev->zone_nr_blocks >> 3; @@ -1167,7 +1214,10 @@ static int dmz_init_zones(struct dmz_metadata *zmd) DMZ_BLOCK_SIZE_BITS); /* Allocate zone array */ - zmd->zones = kcalloc(dev->nr_zones, sizeof(struct dm_zone), GFP_KERNEL); + total_nr_zones = dev->nr_zones; + if (zmd->regu_dmz_dev) + total_nr_zones += zmd->regu_dmz_dev->nr_zones; + zmd->zones = kcalloc(total_nr_zones, sizeof(struct dm_zone), GFP_KERNEL); if (!zmd->zones) return -ENOMEM; @@ -1186,6 +1236,25 @@ static int dmz_init_zones(struct dmz_metadata *zmd) return ret; } + if (zmd->regu_dmz_dev) { + /* Emulate zone information for regular device zone. 
*/ + for (i = 0; i < zmd->regu_dmz_dev->nr_zones; i++) { + struct dm_zone *zone = &zmd->zones[i + dev->nr_zones]; + + INIT_LIST_HEAD(&zone->link); + atomic_set(&zone->refcount, 0); + zone->chunk = DMZ_MAP_UNMAPPED; + + set_bit(DMZ_RND, &zone->flags); + zmd->nr_rnd_zones++; + zmd->nr_useable_zones++; + zone->wp_block = 0; + if (!zmd->sb_zone) + /* Super block zone */ + zmd->sb_zone = zone; + } + } + return 0; } @@ -1313,13 +1382,13 @@ static void dmz_get_zone_weight(struct dmz_metadata *zmd, struct dm_zone *zone); */ static int dmz_load_mapping(struct dmz_metadata *zmd) { - struct dmz_dev *dev = zmd->zoned_dev; struct dm_zone *dzone, *bzone; struct dmz_mblock *dmap_mblk = NULL; struct dmz_map *dmap; unsigned int i = 0, e = 0, chunk = 0; unsigned int dzone_id; unsigned int bzone_id; + struct dmz_dev *dev = zmd_mdev(zmd); /* Metadata block array for the chunk mapping table */ zmd->map_mblk = kcalloc(zmd->nr_map_blocks, @@ -1345,7 +1414,7 @@ static int dmz_load_mapping(struct dmz_metadata *zmd) if (dzone_id == DMZ_MAP_UNMAPPED) goto next; - if (dzone_id >= dev->nr_zones) { + if (dzone_id >= dev->target->nr_zones) { dmz_dev_err(dev, "Chunk %u mapping: invalid data zone ID %u", chunk, dzone_id); return -EIO; @@ -1366,7 +1435,7 @@ static int dmz_load_mapping(struct dmz_metadata *zmd) if (bzone_id == DMZ_MAP_UNMAPPED) goto next; - if (bzone_id >= dev->nr_zones) { + if (bzone_id >= dev->target->nr_zones) { dmz_dev_err(dev, "Chunk %u mapping: invalid buffer zone ID %u", chunk, bzone_id); return -EIO; @@ -1398,7 +1467,7 @@ static int dmz_load_mapping(struct dmz_metadata *zmd) * fully initialized. All remaining zones are unmapped data * zones. Finish initializing those here. */ - for (i = 0; i < dev->nr_zones; i++) { + for (i = 0; i < dev->target->nr_zones; i++) { dzone = dmz_get(zmd, i); if (dmz_is_meta(dzone)) continue; @@ -1632,7 +1701,7 @@ struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd, unsigned int chu /* Allocate a random zone */ dzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!dzone) { - if (dmz_bdev_is_dying(zmd->zoned_dev)) { + if (dmz_bdev_is_dying(zmd_mdev(zmd))) { dzone = ERR_PTR(-EIO); goto out; } @@ -1733,7 +1802,7 @@ struct dm_zone *dmz_get_chunk_buffer(struct dmz_metadata *zmd, /* Allocate a random zone */ bzone = dmz_alloc_zone(zmd, DMZ_ALLOC_RND); if (!bzone) { - if (dmz_bdev_is_dying(zmd->zoned_dev)) { + if (dmz_bdev_is_dying(zmd_mdev(zmd))) { bzone = ERR_PTR(-EIO); goto out; } @@ -2360,7 +2429,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd) /* * Initialize the zoned metadata. */ -int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_metadata **metadata) +int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_dev *regu_dmz_dev, + struct dmz_metadata **metadata) { struct dmz_metadata *zmd; unsigned int i, zid; @@ -2372,6 +2442,7 @@ int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_metadata **metadata) return -ENOMEM; zmd->zoned_dev = dev; + zmd->regu_dmz_dev = regu_dmz_dev; zmd->mblk_rbtree = RB_ROOT; init_rwsem(&zmd->mblk_sem); mutex_init(&zmd->mblk_flush_lock); @@ -2440,9 +2511,9 @@ int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_metadata **metadata) bdev_zoned_model(dev->bdev) == BLK_ZONED_HA ? 
"aware" : "managed"); dmz_dev_info(dev, " %llu 512-byte logical sectors", - (u64)dev->capacity); + (u64)dev->capacity + (u64)regu_dmz_dev->capacity); dmz_dev_info(dev, " %u zones of %llu 512-byte logical sectors", - dev->nr_zones, (u64)dev->zone_nr_sectors); + dev->nr_zones + regu_dmz_dev->nr_zones, (u64)dev->zone_nr_sectors); dmz_dev_info(dev, " %u metadata zones", zmd->nr_meta_zones * 2); dmz_dev_info(dev, " %u data zones for %u chunks", @@ -2488,7 +2559,7 @@ void dmz_dtr_metadata(struct dmz_metadata *zmd) */ int dmz_resume_metadata(struct dmz_metadata *zmd) { - struct dmz_dev *dev = zmd->zoned_dev; + struct dmz_dev *dev = zmd_mdev(zmd); struct dm_zone *zone; sector_t wp_block; unsigned int i; diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index cae4bfe..41dbb9d 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -803,7 +803,7 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv) /* Initialize metadata */ dev = dmz->zoned_dev; - ret = dmz_ctr_metadata(dev, &dmz->metadata); + ret = dmz_ctr_metadata(dev, dmz->regu_dmz_dev, &dmz->metadata); if (ret) { ti->error = "Metadata initialization failed"; goto err_dev; @@ -852,8 +852,8 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv) } mod_delayed_work(dmz->flush_wq, &dmz->flush_work, DMZ_FLUSH_PERIOD); - /* Initialize reclaim */ - ret = dmz_ctr_reclaim(dev, dmz->metadata, &dmz->reclaim); + /* Initialize reclaim, only reclaim from regular device. */ + ret = dmz_ctr_reclaim(dmz->regu_dmz_dev, dmz->metadata, &dmz->reclaim); if (ret) { ti->error = "Zone reclaim initialization failed"; goto err_fwq; diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h index a3535bc..7aa1a30 100644 --- a/drivers/md/dm-zoned.h +++ b/drivers/md/dm-zoned.h @@ -206,7 +206,8 @@ struct dmz_reclaim; /* * Functions defined in dm-zoned-metadata.c */ -int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_metadata **zmd); +int dmz_ctr_metadata(struct dmz_dev *dev, struct dmz_dev *regu_dmz_dev, + struct dmz_metadata **zmd); void dmz_dtr_metadata(struct dmz_metadata *zmd); int dmz_resume_metadata(struct dmz_metadata *zmd);