From patchwork Wed May 19 02:55:19 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266027
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 01/11] block: improve handling of all zones reset operation
Date: Wed, 19 May 2021 11:55:19 +0900
Message-Id: <20210519025529.707897-2-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>
References: <20210519025529.707897-1-damien.lemoal@wdc.com>

SCSI, ZNS and null_blk zoned devices support resetting all zones using
a single command (REQ_OP_ZONE_RESET_ALL), as indicated using the device
request queue flag QUEUE_FLAG_ZONE_RESETALL. This flag is not set for
device mapper targets creating zoned devices. In this case, a user
request for resetting all zones of a device is processed in
blkdev_zone_mgmt() by issuing a REQ_OP_ZONE_RESET operation for each
zone of the device. This leads to different behaviors of the
BLKRESETZONE ioctl() depending on the target device support for the
reset all operation. E.g.

  blkzone reset /dev/sdX

will reset all zones of a SCSI device using a single command that will
ignore conventional, read-only or offline zones. But a dm-linear device
including conventional, read-only or offline zones cannot be reset in
the same manner, as some of the single zone reset operations issued by
blkdev_zone_mgmt() will fail. E.g.:

  blkzone reset /dev/dm-Y
  blkzone: /dev/dm-0: BLKRESETZONE ioctl failed: Remote I/O error

To simplify applications and tools development, unify the behavior of
an all-zone reset operation by modifying blkdev_zone_mgmt() to not
issue a zone reset operation for conventional, read-only and offline
zones, thus mimicking what an actual reset-all device command does on a
device supporting REQ_OP_ZONE_RESET_ALL. The zones needing a reset are
identified using a bitmap that is initialized using a zone report.
Since empty zones do not need a reset, also ignore these zones.

Signed-off-by: Damien Le Moal
Signed-off-by: Christoph Hellwig
---
 block/blk-zoned.c | 87 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 24 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 250cb76ee615..13f053c06d9e 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -161,18 +161,30 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
 }
 EXPORT_SYMBOL_GPL(blkdev_report_zones);
 
-static inline bool blkdev_allow_reset_all_zones(struct block_device *bdev,
-						sector_t sector,
-						sector_t nr_sectors)
+static inline unsigned long *blk_alloc_zone_bitmap(int node,
+						   unsigned int nr_zones)
 {
-	if (!blk_queue_zone_resetall(bdev_get_queue(bdev)))
-		return false;
+	return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long),
+			    GFP_NOIO, node);
+}
 
+static int blk_zone_need_reset_cb(struct blk_zone *zone, unsigned int idx,
+				  void *data)
+{
 	/*
-	 * REQ_OP_ZONE_RESET_ALL can be executed only if the number of sectors
-	 * of the applicable zone range is the entire disk.
+	 * For an all-zones reset, ignore conventional, empty, read-only
+	 * and offline zones.
	 */
-	return !sector && nr_sectors == get_capacity(bdev->bd_disk);
+	switch (zone->cond) {
+	case BLK_ZONE_COND_NOT_WP:
+	case BLK_ZONE_COND_EMPTY:
+	case BLK_ZONE_COND_READONLY:
+	case BLK_ZONE_COND_OFFLINE:
+		return 0;
+	default:
+		set_bit(idx, (unsigned long *)data);
+		return 0;
+	}
 }
 
 /**
@@ -199,8 +211,10 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 	sector_t zone_sectors = blk_queue_zone_sectors(q);
 	sector_t capacity = get_capacity(bdev->bd_disk);
 	sector_t end_sector = sector + nr_sectors;
+	unsigned long *need_reset = NULL;
 	struct bio *bio = NULL;
-	int ret;
+	bool reset_all;
+	int ret = 0;
 
 	if (!blk_queue_is_zoned(q))
 		return -EOPNOTSUPP;
@@ -222,16 +236,44 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 	if ((nr_sectors & (zone_sectors - 1)) && end_sector != capacity)
 		return -EINVAL;
 
+	/*
+	 * In the case of a zone reset operation over all zones,
+	 * REQ_OP_ZONE_RESET_ALL can be used with devices supporting this
+	 * command. For other devices, we emulate this command behavior by
+	 * identifying the zones needing a reset.
+	 */
+	reset_all = op == REQ_OP_ZONE_RESET &&
+		!sector && nr_sectors == capacity;
+	if (reset_all && !blk_queue_zone_resetall(q)) {
+		need_reset = blk_alloc_zone_bitmap(q->node, q->nr_zones);
+		if (!need_reset)
+			return -ENOMEM;
+		ret = bdev->bd_disk->fops->report_zones(bdev->bd_disk, 0,
+				q->nr_zones, blk_zone_need_reset_cb,
+				need_reset);
+		if (ret < 0)
+			return ret;
+		ret = 0;
+	}
+
 	while (sector < end_sector) {
-		bio = blk_next_bio(bio, 0, gfp_mask);
-		bio_set_dev(bio, bdev);
-
 		/*
-		 * Special case for the zone reset operation that reset all
-		 * zones, this is useful for applications like mkfs.
+		 * For an all zone reset operation, if the device does not
+		 * support REQ_OP_ZONE_RESET_ALL, skip zones that do not
+		 * need a reset.
		 */
-		if (op == REQ_OP_ZONE_RESET &&
-		    blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) {
+		if (reset_all && !blk_queue_zone_resetall(q) &&
+		    !test_bit(blk_queue_zone_no(q, sector), need_reset)) {
+			sector += zone_sectors;
+			continue;
+		}
+
+		bio = blk_next_bio(bio, 0, gfp_mask);
+		bio_set_dev(bio, bdev);
+
+		if (reset_all && blk_queue_zone_resetall(q)) {
+			/* The device supports REQ_OP_ZONE_RESET_ALL */
 			bio->bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC;
 			break;
 		}
@@ -244,8 +286,12 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 		cond_resched();
 	}
 
-	ret = submit_bio_wait(bio);
-	bio_put(bio);
+	if (bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+
+	kfree(need_reset);
 
 	return ret;
 }
@@ -396,13 +442,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	return ret;
 }
 
-static inline unsigned long *blk_alloc_zone_bitmap(int node,
-						   unsigned int nr_zones)
-{
-	return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long),
-			    GFP_NOIO, node);
-}
-
 void blk_queue_free_zone_bitmaps(struct request_queue *q)
 {
 	kfree(q->conv_zones_bitmap);
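
For context, the unified behavior can be exercised from user space with a
single BLKRESETZONE ioctl spanning the whole device, which is exactly the
case the new reset_all path handles. A minimal sketch follows; the device
node path is an assumption and error handling is trimmed:

  /* Reset all zones of a zoned block device via BLKRESETZONE. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/blkzoned.h>
  #include <linux/fs.h>

  int main(void)
  {
  	unsigned long long capacity_bytes;
  	struct blk_zone_range range;
  	int fd = open("/dev/nvme0n2", O_WRONLY);	/* assumed device node */

  	if (fd < 0 || ioctl(fd, BLKGETSIZE64, &capacity_bytes) < 0)
  		return 1;

  	/* sector == 0 and nr_sectors == capacity selects the all-zones path */
  	range.sector = 0;
  	range.nr_sectors = capacity_bytes >> 9;
  	if (ioctl(fd, BLKRESETZONE, &range) < 0)
  		perror("BLKRESETZONE");
  	return 0;
  }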
From patchwork Wed May 19 02:55:20 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266029
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 02/11] block: introduce bio zone helpers
Date: Wed, 19 May 2021 11:55:20 +0900
Message-Id: <20210519025529.707897-3-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

Introduce the helper functions bio_zone_no() and bio_zone_is_seq().
Both are the BIO counterparts of the request helpers blk_rq_zone_no()
and blk_rq_zone_is_seq(), respectively returning the number of the
target zone of a bio and true if the BIO target zone is sequential.

Signed-off-by: Damien Le Moal
---
 include/linux/blkdev.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f69c75bd6d27..e74ad1252e78 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1008,6 +1008,18 @@ static inline unsigned int blk_rq_stats_sectors(const struct request *rq)
 /* Helper to convert BLK_ZONE_ZONE_XXX to its string format XXX */
 const char *blk_zone_cond_str(enum blk_zone_cond zone_cond);
 
+static inline unsigned int bio_zone_no(struct request_queue *q,
+				       struct bio *bio)
+{
+	return blk_queue_zone_no(q, bio->bi_iter.bi_sector);
+}
+
+static inline unsigned int bio_zone_is_seq(struct request_queue *q,
+					   struct bio *bio)
+{
+	return blk_queue_zone_is_seq(q, bio->bi_iter.bi_sector);
+}
+
 static inline unsigned int blk_rq_zone_no(struct request *rq)
 {
 	return blk_queue_zone_no(rq->q, blk_rq_pos(rq));
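
As an illustration of how these helpers read at a call site (this snippet
is not part of the patch; need_zone_write_lock() is a hypothetical name):

  /* Hypothetical caller: true if this bio is a write landing in a
   * sequential-write-required zone, i.e. a candidate for write locking.
   */
  static inline bool need_zone_write_lock(struct request_queue *q,
  					struct bio *bio)
  {
  	if (!blk_queue_is_zoned(q) || bio_op(bio) != REQ_OP_WRITE)
  		return false;

  	/* checks the zone containing bio->bi_iter.bi_sector; the zone
  	 * index itself would come from bio_zone_no(q, bio) */
  	return bio_zone_is_seq(q, bio);
  }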
From patchwork Wed May 19 02:55:21 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266031
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 03/11] block: introduce BIO_ZONE_WRITE_LOCKED bio flag
Date: Wed, 19 May 2021 11:55:21 +0900
Message-Id: <20210519025529.707897-4-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

Introduce the BIO flag BIO_ZONE_WRITE_LOCKED to indicate that a BIO
owns the write lock of the zone it is targeting. This is the
counterpart of the struct request flag RQF_ZONE_WRITE_LOCKED.

This new BIO flag is reserved for now for zone write locking control
for device mapper targets exposing a zoned block device.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 include/linux/blk_types.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index db026b6ec15a..e5cf12f102a2 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -304,6 +304,7 @@ enum {
 	BIO_CGROUP_ACCT,	/* has been accounted to a cgroup */
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
 	BIO_REMAPPED,
+	BIO_ZONE_WRITE_LOCKED,	/* Owns a zoned device zone write lock */
 	BIO_FLAG_LAST
 };
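
For illustration only (not from this series), the new flag would be driven
through the existing bio flag accessors; a sketch of a lock/release pairing:

  	/* after acquiring the target zone's write lock for this bio: */
  	bio_set_flag(bio, BIO_ZONE_WRITE_LOCKED);

  	/* ... and in the completion path: */
  	if (bio_flagged(bio, BIO_ZONE_WRITE_LOCKED)) {
  		bio_clear_flag(bio, BIO_ZONE_WRITE_LOCKED);
  		/* release the zone write lock taken for this bio */
  	}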
From patchwork Wed May 19 02:55:22 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266033
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 04/11] dm: Fix dm_accept_partial_bio()
Date: Wed, 19 May 2021 11:55:22 +0900
Message-Id: <20210519025529.707897-5-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

Fix dm_accept_partial_bio() to actually check that zone management
commands are not passed, as explained in the function documentation
comment. Also, since a zone append operation cannot be split, add
REQ_OP_ZONE_APPEND as a forbidden command.

Blank lines are added around the group of BUG_ON() calls to make the
code more legible.

Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index ca2aedd8ee7d..a9211575bfed 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1237,8 +1237,9 @@ static int dm_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 
 /*
  * A target may call dm_accept_partial_bio only from the map routine. It is
- * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_RESET,
- * REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and REQ_OP_ZONE_FINISH.
+ * allowed for all bio types except REQ_PREFLUSH, zone management operations
+ * (REQ_OP_ZONE_RESET, REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and
+ * REQ_OP_ZONE_FINISH) and zone append writes.
 *
 * dm_accept_partial_bio informs the dm that the target only wants to process
 * additional n_sectors sectors of the bio and the rest of the data should be
@@ -1268,9 +1269,13 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
 {
 	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
 	unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT;
+
 	BUG_ON(bio->bi_opf & REQ_PREFLUSH);
+	BUG_ON(op_is_zone_mgmt(bio_op(bio)));
+	BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND);
 	BUG_ON(bi_size > *tio->len_ptr);
 	BUG_ON(n_sectors > bi_size);
+
 	*tio->len_ptr -= bi_size - n_sectors;
 	bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT;
 }
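
To see what the stricter checks mean for a target author, here is a sketch
of a map method that calls dm_accept_partial_bio() safely; example_map,
example_get_dev() and the 8-sector split point are made up for the example:

  static int example_map(struct dm_target *ti, struct bio *bio)
  {
  	/* Zone management and zone append bios must be mapped whole:
  	 * dm_accept_partial_bio() now BUG()s on them.
  	 */
  	if (!op_is_zone_mgmt(bio_op(bio)) &&
  	    bio_op(bio) != REQ_OP_ZONE_APPEND && bio_sectors(bio) > 8)
  		dm_accept_partial_bio(bio, 8);	/* DM resubmits the rest */

  	bio_set_dev(bio, example_get_dev(ti)->bdev);	/* hypothetical helper */
  	return DM_MAPIO_REMAPPED;
  }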
From patchwork Wed May 19 02:55:23 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266035
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 05/11] dm: cleanup device_area_is_invalid()
Date: Wed, 19 May 2021 11:55:23 +0900
Message-Id: <20210519025529.707897-6-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

In device_area_is_invalid(), use bdev_is_zoned() instead of open coding
the test on the zoned model returned by bdev_zoned_model().

Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm-table.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index ee47a332b462..21fd9cd4da32 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -249,7 +249,7 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
 	 * If the target is mapped to zoned block device(s), check
	 * that the zones are not partially mapped.
	 */
-	if (bdev_zoned_model(bdev) != BLK_ZONED_NONE) {
+	if (bdev_is_zoned(bdev)) {
 		unsigned int zone_sectors = bdev_zone_sectors(bdev);
 
 		if (start & (zone_sectors - 1)) {
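
For reference, the helper used here folds the same test into one call; in
include/linux/blkdev.h of this era it reads approximately as follows
(quoted from memory, not part of the patch):

  static inline bool bdev_is_zoned(struct block_device *bdev)
  {
  	struct request_queue *q = bdev_get_queue(bdev);

  	if (q)
  		return blk_queue_is_zoned(q);

  	return false;
  }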
From patchwork Wed May 19 02:55:24 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266037
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 06/11] dm: move zone related code to dm-zone.c
Date: Wed, 19 May 2021 11:55:24 +0900
Message-Id: <20210519025529.707897-7-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

Move the core and table code used for zoned targets, currently
conditionally defined with #ifdef CONFIG_BLK_DEV_ZONED, to the new file
dm-zone.c. This file is conditionally compiled depending on
CONFIG_BLK_DEV_ZONED. The small helper dm_set_zones_restrictions() is
introduced to initialize a mapped device request queue's zone
attributes in dm_table_set_restrictions().
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/Makefile   |   4 ++
 drivers/md/dm-table.c |  14 ++----
 drivers/md/dm-zone.c  | 102 ++++++++++++++++++++++++++++++++++++++++++
 drivers/md/dm.c       |  78 --------------------------------
 drivers/md/dm.h       |  11 +++++
 5 files changed, 120 insertions(+), 89 deletions(-)
 create mode 100644 drivers/md/dm-zone.c

diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index ef7ddc27685c..a74aaf8b1445 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -92,6 +92,10 @@ ifeq ($(CONFIG_DM_UEVENT),y)
 dm-mod-objs += dm-uevent.o
 endif
 
+ifeq ($(CONFIG_BLK_DEV_ZONED),y)
+dm-mod-objs += dm-zone.o
+endif
+
 ifeq ($(CONFIG_DM_VERITY_FEC),y)
 dm-verity-objs += dm-verity-fec.o
 endif
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 21fd9cd4da32..dd9f648ab598 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -2064,17 +2064,9 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
 
-	/*
-	 * For a zoned target, the number of zones should be updated for the
-	 * correct value to be exposed in sysfs queue/nr_zones. For a BIO based
-	 * target, this is all that is needed.
-	 */
-#ifdef CONFIG_BLK_DEV_ZONED
-	if (blk_queue_is_zoned(q)) {
-		WARN_ON_ONCE(queue_is_mq(q));
-		q->nr_zones = blkdev_nr_zones(t->md->disk);
-	}
-#endif
+	/* For a zoned target, setup the zones related queue attributes */
+	if (blk_queue_is_zoned(q))
+		dm_set_zones_restrictions(t, q);
 
 	dm_update_keyslot_manager(q, t);
 	blk_queue_update_readahead(q);
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
new file mode 100644
index 000000000000..3243c42b7951
--- /dev/null
+++ b/drivers/md/dm-zone.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ */
+
+#include <linux/blkdev.h>
+
+#include "dm-core.h"
+
+/*
+ * User facing dm device block device report zone operation. This calls the
+ * report_zones operation for each target of a device table. This operation is
+ * generally implemented by targets using dm_report_zones().
+ */
+int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+			unsigned int nr_zones, report_zones_cb cb, void *data)
+{
+	struct mapped_device *md = disk->private_data;
+	struct dm_table *map;
+	int srcu_idx, ret;
+	struct dm_report_zones_args args = {
+		.next_sector = sector,
+		.orig_data = data,
+		.orig_cb = cb,
+	};
+
+	if (dm_suspended_md(md))
+		return -EAGAIN;
+
+	map = dm_get_live_table(md, &srcu_idx);
+	if (!map) {
+		ret = -EIO;
+		goto out;
+	}
+
+	do {
+		struct dm_target *tgt;
+
+		tgt = dm_table_find_target(map, args.next_sector);
+		if (WARN_ON_ONCE(!tgt->type->report_zones)) {
+			ret = -EIO;
+			goto out;
+		}
+
+		args.tgt = tgt;
+		ret = tgt->type->report_zones(tgt, &args,
+					      nr_zones - args.zone_idx);
+		if (ret < 0)
+			goto out;
+	} while (args.zone_idx < nr_zones &&
+		 args.next_sector < get_capacity(disk));
+
+	ret = args.zone_idx;
+out:
+	dm_put_live_table(md, srcu_idx);
+	return ret;
+}
+
+int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data)
+{
+	struct dm_report_zones_args *args = data;
+	sector_t sector_diff = args->tgt->begin - args->start;
+
+	/*
+	 * Ignore zones beyond the target range.
+	 */
+	if (zone->start >= args->start + args->tgt->len)
+		return 0;
+
+	/*
+	 * Remap the start sector and write pointer position of the zone
+	 * to match its position in the target range.
+	 */
+	zone->start += sector_diff;
+	if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) {
+		if (zone->cond == BLK_ZONE_COND_FULL)
+			zone->wp = zone->start + zone->len;
+		else if (zone->cond == BLK_ZONE_COND_EMPTY)
+			zone->wp = zone->start;
+		else
+			zone->wp += sector_diff;
+	}
+
+	args->next_sector = zone->start + zone->len;
+	return args->orig_cb(zone, args->zone_idx++, args->orig_data);
+}
+EXPORT_SYMBOL_GPL(dm_report_zones_cb);
+
+void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q)
+{
+	if (!blk_queue_is_zoned(q))
+		return;
+
+	/*
+	 * For a zoned target, the number of zones should be updated for the
+	 * correct value to be exposed in sysfs queue/nr_zones. For a BIO based
+	 * target, this is all that is needed.
+	 */
+	WARN_ON_ONCE(queue_is_mq(q));
+	q->nr_zones = blkdev_nr_zones(t->md->disk);
+}
+
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index a9211575bfed..45d2dc2ee844 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -444,84 +444,6 @@ static int dm_blk_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 	return dm_get_geometry(md, geo);
 }
 
-#ifdef CONFIG_BLK_DEV_ZONED
-int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data)
-{
-	struct dm_report_zones_args *args = data;
-	sector_t sector_diff = args->tgt->begin - args->start;
-
-	/*
-	 * Ignore zones beyond the target range.
-	 */
-	if (zone->start >= args->start + args->tgt->len)
-		return 0;
-
-	/*
-	 * Remap the start sector and write pointer position of the zone
-	 * to match its position in the target range.
-	 */
-	zone->start += sector_diff;
-	if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) {
-		if (zone->cond == BLK_ZONE_COND_FULL)
-			zone->wp = zone->start + zone->len;
-		else if (zone->cond == BLK_ZONE_COND_EMPTY)
-			zone->wp = zone->start;
-		else
-			zone->wp += sector_diff;
-	}
-
-	args->next_sector = zone->start + zone->len;
-	return args->orig_cb(zone, args->zone_idx++, args->orig_data);
-}
-EXPORT_SYMBOL_GPL(dm_report_zones_cb);
-
-static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
-			       unsigned int nr_zones, report_zones_cb cb, void *data)
-{
-	struct mapped_device *md = disk->private_data;
-	struct dm_table *map;
-	int srcu_idx, ret;
-	struct dm_report_zones_args args = {
-		.next_sector = sector,
-		.orig_data = data,
-		.orig_cb = cb,
-	};
-
-	if (dm_suspended_md(md))
-		return -EAGAIN;
-
-	map = dm_get_live_table(md, &srcu_idx);
-	if (!map) {
-		ret = -EIO;
-		goto out;
-	}
-
-	do {
-		struct dm_target *tgt;
-
-		tgt = dm_table_find_target(map, args.next_sector);
-		if (WARN_ON_ONCE(!tgt->type->report_zones)) {
-			ret = -EIO;
-			goto out;
-		}
-
-		args.tgt = tgt;
-		ret = tgt->type->report_zones(tgt, &args,
-					      nr_zones - args.zone_idx);
-		if (ret < 0)
-			goto out;
-	} while (args.zone_idx < nr_zones &&
-		 args.next_sector < get_capacity(disk));
-
-	ret = args.zone_idx;
-out:
-	dm_put_live_table(md, srcu_idx);
-	return ret;
-}
-#else
-#define dm_blk_report_zones NULL
-#endif /* CONFIG_BLK_DEV_ZONED */
-
 static int dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
 			    struct block_device **bdev)
 {
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index b441ad772c18..fdf1536a4b62 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -100,6 +100,17 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
 */
 #define dm_target_hybrid(t) (dm_target_bio_based(t) && dm_target_request_based(t))
 
+/*
+ * Zoned targets related functions.
+ */
+void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q);
+#ifdef CONFIG_BLK_DEV_ZONED
+int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+			unsigned int nr_zones, report_zones_cb cb, void *data);
+#else
+#define dm_blk_report_zones NULL
+#endif
+
 /*-----------------------------------------------------------------
  * A registry of target types.
  *---------------------------------------------------------------*/
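
The remapping done by dm_report_zones_cb() above is plain offset
arithmetic; a small standalone model (with made-up sector numbers, not
part of the patch) shows the effect on a zone's start and write pointer:

  #include <stdio.h>

  int main(void)
  {
  	unsigned long long begin = 1048576;  /* target start in the dm device */
  	unsigned long long start = 0;        /* mapped region start on the device */
  	unsigned long long diff = begin - start;

  	unsigned long long zone_start = 524288;  /* device-reported zone start */
  	unsigned long long zone_wp = 524544;     /* device-reported write pointer */

  	/* a sequential, non-full, non-empty zone: shift both start and wp */
  	printf("remapped start=%llu wp=%llu\n",
  	       zone_start + diff, zone_wp + diff);
  	return 0;
  }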
From patchwork Wed May 19 02:55:25 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266039
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 07/11] dm: Introduce dm_report_zones()
Date: Wed, 19 May 2021 11:55:25 +0900
Message-Id: <20210519025529.707897-8-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

To simplify the implementation of the report_zones operation of a zoned
target, introduce the function dm_report_zones() to set a target
mapping start sector in struct dm_report_zones_args and call
blkdev_report_zones(). This new function is exported; the report zones
callback function dm_report_zones_cb() is not exported anymore.
dm-linear, dm-flakey and dm-crypt are modified to use dm_report_zones().

Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm-crypt.c         |  7 +++----
 drivers/md/dm-flakey.c        |  7 +++----
 drivers/md/dm-linear.c        |  7 +++----
 drivers/md/dm-zone.c          | 23 ++++++++++++++++++++---
 include/linux/device-mapper.h |  3 ++-
 5 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b0ab080f2567..f410ceee51d7 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3138,11 +3138,10 @@ static int crypt_report_zones(struct dm_target *ti,
 		struct dm_report_zones_args *args, unsigned int nr_zones)
 {
 	struct crypt_config *cc = ti->private;
-	sector_t sector = cc->start + dm_target_offset(ti, args->next_sector);
 
-	args->start = cc->start;
-	return blkdev_report_zones(cc->dev->bdev, sector, nr_zones,
-				   dm_report_zones_cb, args);
+	return dm_report_zones(cc->dev->bdev, cc->start,
+			cc->start + dm_target_offset(ti, args->next_sector),
+			args, nr_zones);
 }
 #else
 #define crypt_report_zones NULL
diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index b7fee9936f05..5877220c01ed 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -463,11 +463,10 @@ static int flakey_report_zones(struct dm_target *ti,
 		struct dm_report_zones_args *args, unsigned int nr_zones)
 {
 	struct flakey_c *fc = ti->private;
-	sector_t sector = flakey_map_sector(ti, args->next_sector);
 
-	args->start = fc->start;
-	return blkdev_report_zones(fc->dev->bdev, sector, nr_zones,
-				   dm_report_zones_cb, args);
+	return dm_report_zones(fc->dev->bdev, fc->start,
+			       flakey_map_sector(ti, args->next_sector),
+			       args, nr_zones);
 }
 #else
 #define flakey_report_zones NULL
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 92db0f5e7f28..c91f1e2e2f65 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -140,11 +140,10 @@ static int linear_report_zones(struct dm_target *ti,
 		struct dm_report_zones_args *args, unsigned int nr_zones)
 {
 	struct linear_c *lc = ti->private;
-	sector_t sector = linear_map_sector(ti, args->next_sector);
 
-	args->start = lc->start;
-	return blkdev_report_zones(lc->dev->bdev, sector, nr_zones,
-				   dm_report_zones_cb, args);
+	return dm_report_zones(lc->dev->bdev, lc->start,
+			       linear_map_sector(ti, args->next_sector),
+			       args, nr_zones);
 }
 #else
 #define linear_report_zones NULL
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 3243c42b7951..b42474043249 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -56,7 +56,8 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
 	return ret;
 }
 
-int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data)
+static int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx,
+			      void *data)
 {
 	struct dm_report_zones_args *args = data;
 	sector_t sector_diff = args->tgt->begin - args->start;
@@ -84,7 +85,24 @@ int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data)
 	args->next_sector = zone->start + zone->len;
 	return args->orig_cb(zone, args->zone_idx++, args->orig_data);
 }
-EXPORT_SYMBOL_GPL(dm_report_zones_cb);
+
+/*
+ * Helper for drivers of zoned targets to implement struct target_type
+ * report_zones operation.
+ */
+int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector,
+		    struct dm_report_zones_args *args, unsigned int nr_zones)
+{
+	/*
+	 * Set the target mapping start sector first so that
+	 * dm_report_zones_cb() can correctly remap zone information.
+	 */
+	args->start = start;
+
+	return blkdev_report_zones(bdev, sector, nr_zones,
+				   dm_report_zones_cb, args);
+}
+EXPORT_SYMBOL_GPL(dm_report_zones);
 
 void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q)
 {
@@ -99,4 +117,3 @@ void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q)
 	WARN_ON_ONCE(queue_is_mq(q));
 	q->nr_zones = blkdev_nr_zones(t->md->disk);
 }
-
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index ff700fb6ce1d..caea0a079d2d 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -478,7 +478,8 @@ struct dm_report_zones_args {
 	/* must be filled by ->report_zones before calling dm_report_zones_cb */
 	sector_t start;
 };
-int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data);
+int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector,
+		    struct dm_report_zones_args *args, unsigned int nr_zones);
 #endif /* CONFIG_BLK_DEV_ZONED */
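
A hypothetical out-of-tree zoned target would now follow the same pattern
as dm-linear above; a sketch (struct example_c and its fields are
assumptions, not part of the series):

  static int example_report_zones(struct dm_target *ti,
  		struct dm_report_zones_args *args, unsigned int nr_zones)
  {
  	struct example_c *ec = ti->private;	/* assumed: ->dev, ->start */

  	return dm_report_zones(ec->dev->bdev, ec->start,
  			       ec->start + dm_target_offset(ti, args->next_sector),
  			       args, nr_zones);
  }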
From patchwork Wed May 19 02:55:26 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266041
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 08/11] dm: Forbid requeue of writes to zones
Date: Wed, 19 May 2021 11:55:26 +0900
Message-Id: <20210519025529.707897-9-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>

A target map method requesting the requeue of a bio with
DM_MAPIO_REQUEUE or completing it with DM_ENDIO_REQUEUE can cause
unaligned write errors if the bio is a write operation targeting a
sequential zone. If a zoned target requests such a requeue, warn about
it and kill the IO.

The function dm_is_zone_write() is introduced to detect write
operations to zoned targets.

This change does not affect the target drivers supporting zoned devices
and exposing a zoned device, namely dm-crypt, dm-linear and dm-flakey,
as none of these targets ever requests a requeue.
Signed-off-by: Damien Le Moal
---
 drivers/md/dm-zone.c | 17 +++++++++++++++++
 drivers/md/dm.c      | 18 +++++++++++++++---
 drivers/md/dm.h      |  5 +++++
 3 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index b42474043249..edc3bbb45637 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -104,6 +104,23 @@ int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector,
 }
 EXPORT_SYMBOL_GPL(dm_report_zones);
 
+bool dm_is_zone_write(struct mapped_device *md, struct bio *bio)
+{
+	struct request_queue *q = md->queue;
+
+	if (!blk_queue_is_zoned(q))
+		return false;
+
+	switch (bio_op(bio)) {
+	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_WRITE_SAME:
+	case REQ_OP_WRITE:
+		return !op_is_flush(bio->bi_opf) && bio_sectors(bio);
+	default:
+		return false;
+	}
+}
+
 void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q)
 {
 	if (!blk_queue_is_zoned(q))
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 45d2dc2ee844..4426019a89cc 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -846,11 +846,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
 			 * Target requested pushing back the I/O.
 			 */
 			spin_lock_irqsave(&md->deferred_lock, flags);
-			if (__noflush_suspending(md))
+			if (__noflush_suspending(md) &&
+			    !WARN_ON_ONCE(dm_is_zone_write(md, bio)))
 				/* NOTE early return due to BLK_STS_DM_REQUEUE below */
 				bio_list_add_head(&md->deferred, io->orig_bio);
 			else
-				/* noflush suspend was interrupted. */
+				/*
+				 * noflush suspend was interrupted or this is
+				 * a write to a zoned target.
+				 */
 				io->status = BLK_STS_IOERR;
 			spin_unlock_irqrestore(&md->deferred_lock, flags);
 		}
@@ -947,7 +951,15 @@ static void clone_endio(struct bio *bio)
 		int r = endio(tio->ti, bio, &error);
 		switch (r) {
 		case DM_ENDIO_REQUEUE:
-			error = BLK_STS_DM_REQUEUE;
+			/*
+			 * Requeuing writes to a sequential zone of a zoned
+			 * target will break the sequential write pattern:
+			 * fail such IO.
+			 */
+			if (WARN_ON_ONCE(dm_is_zone_write(md, bio)))
+				error = BLK_STS_IOERR;
+			else
+				error = BLK_STS_DM_REQUEUE;
 			fallthrough;
 		case DM_ENDIO_DONE:
 			break;
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index fdf1536a4b62..39c243258e24 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -107,8 +107,13 @@ void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q);
 #ifdef CONFIG_BLK_DEV_ZONED
 int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
 			unsigned int nr_zones, report_zones_cb cb, void *data);
+bool dm_is_zone_write(struct mapped_device *md, struct bio *bio);
 #else
 #define dm_blk_report_zones NULL
+static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio)
+{
+	return false;
+}
 #endif
 
 /*-----------------------------------------------------------------
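
For a target author, the practical consequence is that an end_io method
may no longer return DM_ENDIO_REQUEUE for zone writes; a sketch of a
compliant method (example_end_io is a made-up name, and the BLK_STS_AGAIN
condition is only illustrative):

  static int example_end_io(struct dm_target *ti, struct bio *bio,
  			  blk_status_t *error)
  {
  	/* Requeueing remains acceptable for reads... */
  	if (*error == BLK_STS_AGAIN && bio_op(bio) == REQ_OP_READ)
  		return DM_ENDIO_REQUEUE;

  	/*
  	 * ...but never for writes to sequential zones: DM core would now
  	 * WARN and fail the IO with BLK_STS_IOERR.
  	 */
  	return DM_ENDIO_DONE;
  }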
From patchwork Wed May 19 02:55:27 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12266043
From: Damien Le Moal 
To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe 
Subject: [PATCH 09/11] dm: rearrange core declarations
Date: Wed, 19 May 2021 11:55:27 +0900
Message-Id: <20210519025529.707897-10-damien.lemoal@wdc.com>
In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com>
References: <20210519025529.707897-1-damien.lemoal@wdc.com>

Move the definitions of struct dm_target_io, struct dm_io and the bits of the flags field of struct mapped_device from dm.c to dm-core.h to make them usable from dm-zone.c. For the same reason, declare dec_pending() in dm-core.h after renaming it to dm_io_dec_pending(). For symmetry of the function names, introduce the inline helper dm_io_inc_pending() instead of using atomic_inc() calls directly.

Signed-off-by: Damien Le Moal 
---
drivers/md/dm-core.h | 52 ++++++++++++++++++++++++++++++++++++++
drivers/md/dm.c | 59 ++++++--------------------------------------
2 files changed, 59 insertions(+), 52 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index 5953ff2bd260..cfabc1c91f9f 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -116,6 +116,19 @@ struct mapped_device { struct srcu_struct io_barrier; }; +/* + * Bits for the flags field of struct mapped_device. + */ +#define DMF_BLOCK_IO_FOR_SUSPEND 0 +#define DMF_SUSPENDED 1 +#define DMF_FROZEN 2 +#define DMF_FREEING 3 +#define DMF_DELETING 4 +#define DMF_NOFLUSH_SUSPENDING 5 +#define DMF_DEFERRED_REMOVE 6 +#define DMF_SUSPENDED_INTERNALLY 7 +#define DMF_POST_SUSPENDING 8 + void disable_discard(struct mapped_device *md); void disable_write_same(struct mapped_device *md); void disable_write_zeroes(struct mapped_device *md); @@ -173,6 +186,45 @@ struct dm_table { #endif }; +/* + * One of these is allocated per clone bio. + */ +#define DM_TIO_MAGIC 7282014 +struct dm_target_io { + unsigned int magic; + struct dm_io *io; + struct dm_target *ti; + unsigned int target_bio_nr; + unsigned int *len_ptr; + bool inside_dm_io; + struct bio clone; +}; + +/* + * One of these is allocated per original bio. + * It contains the first clone used for that original.
+ */ +#define DM_IO_MAGIC 5191977 +struct dm_io { + unsigned int magic; + struct mapped_device *md; + blk_status_t status; + atomic_t io_count; + struct bio *orig_bio; + unsigned long start_time; + spinlock_t endio_lock; + struct dm_stats_aux stats_aux; + /* last member of dm_target_io is 'struct bio' */ + struct dm_target_io tio; +}; + +static inline void dm_io_inc_pending(struct dm_io *io) +{ + atomic_inc(&io->io_count); +} + +void dm_io_dec_pending(struct dm_io *io, blk_status_t error); + static inline struct completion *dm_get_completion_from_kobject(struct kobject *kobj) { return &container_of(kobj, struct dm_kobject_holder, kobj)->completion; diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 4426019a89cc..563504163b74 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -74,38 +74,6 @@ struct clone_info { unsigned sector_count; }; -/* - * One of these is allocated per clone bio. - */ -#define DM_TIO_MAGIC 7282014 -struct dm_target_io { - unsigned magic; - struct dm_io *io; - struct dm_target *ti; - unsigned target_bio_nr; - unsigned *len_ptr; - bool inside_dm_io; - struct bio clone; -}; - -/* - * One of these is allocated per original bio. - * It contains the first clone used for that original. - */ -#define DM_IO_MAGIC 5191977 -struct dm_io { - unsigned magic; - struct mapped_device *md; - blk_status_t status; - atomic_t io_count; - struct bio *orig_bio; - unsigned long start_time; - spinlock_t endio_lock; - struct dm_stats_aux stats_aux; - /* last member of dm_target_io is 'struct bio' */ - struct dm_target_io tio; -}; - #define DM_TARGET_IO_BIO_OFFSET (offsetof(struct dm_target_io, clone)) #define DM_IO_BIO_OFFSET \ (offsetof(struct dm_target_io, clone) + offsetof(struct dm_io, tio)) @@ -137,19 +105,6 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr); #define MINOR_ALLOCED ((void *)-1) -/* - * Bits for the md->flags field. - */ -#define DMF_BLOCK_IO_FOR_SUSPEND 0 -#define DMF_SUSPENDED 1 -#define DMF_FROZEN 2 -#define DMF_FREEING 3 -#define DMF_DELETING 4 -#define DMF_NOFLUSH_SUSPENDING 5 -#define DMF_DEFERRED_REMOVE 6 -#define DMF_SUSPENDED_INTERNALLY 7 -#define DMF_POST_SUSPENDING 8 - #define DM_NUMA_NODE NUMA_NO_NODE static int dm_numa_node = DM_NUMA_NODE; @@ -825,7 +780,7 @@ static int __noflush_suspending(struct mapped_device *md) * Decrements the number of outstanding ios that a bio has been * cloned into, completing the original io if necc. */ -static void dec_pending(struct dm_io *io, blk_status_t error) +void dm_io_dec_pending(struct dm_io *io, blk_status_t error) { unsigned long flags; blk_status_t io_error; @@ -978,7 +933,7 @@ static void clone_endio(struct bio *bio) } free_tio(tio); - dec_pending(io, error); + dm_io_dec_pending(io, error); } /* @@ -1247,7 +1202,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) * anything, the target has assumed ownership of * this io. 
*/ - atomic_inc(&io->io_count); + dm_io_inc_pending(io); sector = clone->bi_iter.bi_sector; if (unlikely(swap_bios_limit(ti, clone))) { @@ -1273,7 +1228,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) up(&md->swap_bios_semaphore); } free_tio(tio); - dec_pending(io, BLK_STS_IOERR); + dm_io_dec_pending(io, BLK_STS_IOERR); break; case DM_MAPIO_REQUEUE: if (unlikely(swap_bios_limit(ti, clone))) { @@ -1281,7 +1236,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) up(&md->swap_bios_semaphore); } free_tio(tio); - dec_pending(io, BLK_STS_DM_REQUEUE); + dm_io_dec_pending(io, BLK_STS_DM_REQUEUE); break; default: DMWARN("unimplemented target map return value: %d", r); @@ -1570,7 +1525,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, if (bio->bi_opf & REQ_PREFLUSH) { error = __send_empty_flush(&ci); - /* dec_pending submits any data associated with flush */ + /* dm_io_dec_pending submits any data associated with flush */ } else if (op_is_zone_mgmt(bio_op(bio))) { ci.bio = bio; ci.sector_count = 0; @@ -1611,7 +1566,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, } /* drop the extra reference count */ - dec_pending(ci.io, errno_to_blk_status(error)); + dm_io_dec_pending(ci.io, errno_to_blk_status(error)); return ret; }
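The dm_io_inc_pending()/dm_io_dec_pending() pair that this patch names is a plain reference count on the clone bios outstanding for one original bio: the count is bumped before handing a clone to the target and dropped on each completion, and the original I/O completes when the count reaches zero. A minimal userspace model of the same pattern, with invented names, C11 atomics standing in for the kernel's atomic_t, and no locking around the error field, might look like:

#include <stdatomic.h>
#include <stdio.h>

/* Toy stand-in for struct dm_io: one count of outstanding clone bios. */
struct io {
	atomic_int io_count;
	int error;
};

static void io_inc_pending(struct io *io)
{
	atomic_fetch_add(&io->io_count, 1);
}

/* The last decrement completes the original I/O. */
static void io_dec_pending(struct io *io, int error)
{
	if (error)
		io->error = error;
	if (atomic_fetch_sub(&io->io_count, 1) == 1)
		printf("original io completes, status %d\n", io->error);
}

int main(void)
{
	struct io io = { .error = 0 };

	atomic_init(&io.io_count, 1);	/* submitter's own reference */

	io_inc_pending(&io);		/* reference for one clone bio */
	io_dec_pending(&io, 0);		/* clone completed */
	io_dec_pending(&io, 0);		/* submitter drops its ref: completion */
	return 0;
}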
d="scan'208";a="173265912" Received: from uls-op-cesaip01.wdc.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 19 May 2021 10:55:42 +0800 IronPort-SDR: s13ypOSNzqxv/Rx5J+P+wCIkaWQc52yTRUSrcvG4QKs0MawUfagudEZpFVEUe+YLAOczS3v6pN euFRQdgsCF29vmY6IUFDvT8XWjBEdJS4XkMm75wVwf18DENsiw81j0MFJW22FX4njZ63n65huo 9ErL72ThgaeAP2f1X8k3su4hJkUH5X5w4SuVnk5+ZxOhyx9vlsrkxCQB2j0ZWA3etKbrPWdIWX dl17JOXvhKr+frF7Au2SR/YweVcZjVJlJ4Xj18wwDsjfkV6fXOFw6jzaJWwe+jHuGVfLjVdofQ +ZztFEqEyOXr7ciydmlt1J2t Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 May 2021 19:35:21 -0700 IronPort-SDR: jNkFgQo5ptPR6VKa33+iz/m9RiywDay6dqxTwXW4K0ynArLIgtB8YeYqh3Ni+HQazwhAYy2FSl TP1OwaSR6VaZ17nJZKtIS3L1Vz+X8DCkRzm3Zpf7pNrYtQdGY6ehiHPNsShzgSIJ47z03A2mLU eEhqIfb35Xo9TwrZYjC1hcjgx744gIHbayHPRXqjlVR0uwChvkPbPktQtc5H9i/deZcZwaZ298 ghCF5m4kXZ0WFIQaPhqth7g0RChEkZjzevaSuop35eavwJSpTd3tNHhnQirFURZlvd7HUSDF6V lwA= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 18 May 2021 19:55:41 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH 10/11] dm: introduce zone append emulation Date: Wed, 19 May 2021 11:55:28 +0900 Message-Id: <20210519025529.707897-11-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com> References: <20210519025529.707897-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org For zoned targets that cannot support zone append operations, implement an emulation using regular write operations. If the original BIO submitted by the user is a zone append operation, change its clone into a regular write operation directed at the target zone write pointer position. To do so, an array of write pointer offsets (write pointer position relative to the start of a zone) is added to struct mapped_device. All operations that modify a sequential zone write pointer (writes, zone reset, zone finish and zone append) are intersepted in __map_bio() and processed using the new functions dm_zone_map_bio(). Detection of the target ability to natively support zone append operations is done from dm_table_set_restrictions() by calling the function dm_set_zones_restrictions(). A target that does not support zone append operation, either by explicitly declaring it using the new struct dm_target field zone_append_not_supported, or because the device table contains a non-zoned device, has its mapped device marked with the new flag DMF_ZONE_APPEND_EMULATED. The helper function dm_emulate_zone_append() is introduced to test a mapped device for this new flag. Atomicity of the zones write pointer tracking and updates is done using a zone write locking mechanism based on a bitmap. This is similar to the block layer method but based on BIOs rather than struct request. A zone write lock is taken in dm_zone_map_bio() for any clone BIO with an operation type that changes the BIO target zone write pointer position. The zone write lock is released if the clone BIO is failed before submission or when dm_zone_endio() is called when the clone BIO completes. 
The zone write lock bitmap of the mapped device, the bitmap indicating zone types (conv_zones_bitmap) and the write pointer offset array (zwp_offset) are allocated and initialized with a full device zone report in dm_set_zones_restrictions() using the function dm_revalidate_zones().

For failed operations that may have modified a zone write pointer, the zone write pointer offset is marked as invalid in dm_zone_endio(). Zones with an invalid write pointer offset are checked and the write pointer updated using an internal report zone operation when the faulty zone is accessed again by the user.

All functions added for this emulation have minimal overhead for zoned targets that natively support zone append operations, and regular device targets are not affected. Builds with CONFIG_BLK_DEV_ZONED disabled are not impacted either, as all dm zone related functions are stubbed out in that case.

Signed-off-by: Damien Le Moal 
---
drivers/md/dm-core.h | 14 +
drivers/md/dm-table.c | 19 +-
drivers/md/dm-zone.c | 617 ++++++++++++++++++++++++++++++++--
drivers/md/dm.c | 39 ++-
drivers/md/dm.h | 18 +-
include/linux/device-mapper.h | 6 +
6 files changed, 659 insertions(+), 54 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index cfabc1c91f9f..2dbb0c7ff720 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -114,6 +114,12 @@ struct mapped_device { bool init_tio_pdu:1; struct srcu_struct io_barrier; + +#ifdef CONFIG_BLK_DEV_ZONED + unsigned int nr_zones; + spinlock_t zwp_offset_lock; + unsigned int *zwp_offset; +#endif }; /* @@ -128,6 +134,7 @@ struct mapped_device { #define DMF_DEFERRED_REMOVE 6 #define DMF_SUSPENDED_INTERNALLY 7 #define DMF_POST_SUSPENDING 8 +#define DMF_EMULATE_ZONE_APPEND 9 void disable_discard(struct mapped_device *md); void disable_write_same(struct mapped_device *md); @@ -143,6 +150,13 @@ static inline struct dm_stats *dm_get_stats(struct mapped_device *md) return &md->stats; } +static inline bool dm_emulate_zone_append(struct mapped_device *md) +{ + if (blk_queue_is_zoned(md->queue)) + return test_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + return false; +} + #define DM_TABLE_MAX_DEPTH 16 struct dm_table {

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index dd9f648ab598..21fdccfb16cf 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1981,11 +1981,12 @@ static int device_requires_stable_pages(struct dm_target *ti, return blk_queue_stable_writes(q); } -void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, - struct queue_limits *limits) +int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + struct queue_limits *limits) { bool wc = false, fua = false; int page_size = PAGE_SIZE; + int r; /* * Copy table's limits to the DM device's request_queue @@ -2064,12 +2065,20 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, dm_table_any_dev_attr(t, device_is_not_random, NULL)) blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); - /* For a zoned target, setup the zones related queue attributes */ - if (blk_queue_is_zoned(q)) - dm_set_zones_restrictions(t, q); + /* + * For a zoned target, set up the zone related queue attributes and, + * if necessary, the resources for zone append emulation.
+ */ + if (blk_queue_is_zoned(q)) { + r = dm_set_zones_restrictions(t, q); + if (r) + return r; + } dm_update_keyslot_manager(q, t); blk_queue_update_readahead(q); + + return 0; } unsigned int dm_table_get_num_targets(struct dm_table *t) diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index edc3bbb45637..388a9bf3ba8a 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -4,55 +4,73 @@ */ #include +#include +#include +#include #include "dm-core.h" +#define DM_MSG_PREFIX "zone" + +#define DM_ZONE_INVALID_WP_OFST UINT_MAX +#define DM_ZONE_UPDATING_WP_OFST (DM_ZONE_INVALID_WP_OFST - 1) + /* - * User facing dm device block device report zone operation. This calls the - * report_zones operation for each target of a device table. This operation is - * generally implemented by targets using dm_report_zones(). + * For internal zone reports bypassing the top BIO submission path. */ -int dm_blk_report_zones(struct gendisk *disk, sector_t sector, - unsigned int nr_zones, report_zones_cb cb, void *data) +static int dm_blk_do_report_zones(struct mapped_device *md, struct dm_table *t, + sector_t sector, unsigned int nr_zones, + report_zones_cb cb, void *data) { - struct mapped_device *md = disk->private_data; - struct dm_table *map; - int srcu_idx, ret; + struct gendisk *disk = md->disk; + int ret; struct dm_report_zones_args args = { .next_sector = sector, .orig_data = data, .orig_cb = cb, }; - if (dm_suspended_md(md)) - return -EAGAIN; - - map = dm_get_live_table(md, &srcu_idx); - if (!map) { - ret = -EIO; - goto out; - } - do { struct dm_target *tgt; - tgt = dm_table_find_target(map, args.next_sector); - if (WARN_ON_ONCE(!tgt->type->report_zones)) { - ret = -EIO; - goto out; - } + tgt = dm_table_find_target(t, args.next_sector); + if (WARN_ON_ONCE(!tgt->type->report_zones)) + return -EIO; args.tgt = tgt; ret = tgt->type->report_zones(tgt, &args, nr_zones - args.zone_idx); if (ret < 0) - goto out; + return ret; } while (args.zone_idx < nr_zones && args.next_sector < get_capacity(disk)); - ret = args.zone_idx; -out: + return args.zone_idx; +} + +/* + * User facing dm device block device report zone operation. This calls the + * report_zones operation for each target of a device table. This operation is + * generally implemented by targets using dm_report_zones(). 
+ */ +int dm_blk_report_zones(struct gendisk *disk, sector_t sector, + unsigned int nr_zones, report_zones_cb cb, void *data) +{ + struct mapped_device *md = disk->private_data; + struct dm_table *map; + int srcu_idx, ret; + + if (dm_suspended_md(md)) + return -EAGAIN; + + map = dm_get_live_table(md, &srcu_idx); + if (!map) + return -EIO; + + ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, data); + + dm_put_live_table(md, srcu_idx); + + return ret; } @@ -121,16 +139,553 @@ bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) } } -void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) +void dm_init_zoned_dev(struct mapped_device *md) { - if (!blk_queue_is_zoned(q)) - return; + spin_lock_init(&md->zwp_offset_lock); +} + +void dm_cleanup_zoned_dev(struct mapped_device *md) +{ + struct request_queue *q = md->queue; + + if (q) { + kfree(q->conv_zones_bitmap); + q->conv_zones_bitmap = NULL; + kfree(q->seq_zones_wlock); + q->seq_zones_wlock = NULL; + } + + kvfree(md->zwp_offset); + md->zwp_offset = NULL; + md->nr_zones = 0; +} + +static unsigned int dm_get_zone_wp_offset(struct blk_zone *zone) +{ + switch (zone->cond) { + case BLK_ZONE_COND_IMP_OPEN: + case BLK_ZONE_COND_EXP_OPEN: + case BLK_ZONE_COND_CLOSED: + return zone->wp - zone->start; + case BLK_ZONE_COND_FULL: + return zone->len; + case BLK_ZONE_COND_EMPTY: + case BLK_ZONE_COND_NOT_WP: + case BLK_ZONE_COND_OFFLINE: + case BLK_ZONE_COND_READONLY: + default: + /* + * Conventional, offline and read-only zones do not have a valid + * write pointer. Use 0, as for an empty zone. + */ + return 0; + } +} + +static int dm_zone_revalidate_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ + struct mapped_device *md = data; + struct request_queue *q = md->queue; + + switch (zone->type) { + case BLK_ZONE_TYPE_CONVENTIONAL: + if (!q->conv_zones_bitmap) { + q->conv_zones_bitmap = + kcalloc(BITS_TO_LONGS(q->nr_zones), + sizeof(unsigned long), GFP_NOIO); + if (!q->conv_zones_bitmap) + return -ENOMEM; + } + set_bit(idx, q->conv_zones_bitmap); + break; + case BLK_ZONE_TYPE_SEQWRITE_REQ: + case BLK_ZONE_TYPE_SEQWRITE_PREF: + if (!q->seq_zones_wlock) { + q->seq_zones_wlock = + kcalloc(BITS_TO_LONGS(q->nr_zones), + sizeof(unsigned long), GFP_NOIO); + if (!q->seq_zones_wlock) + return -ENOMEM; + } + if (!md->zwp_offset) { + md->zwp_offset = + kvcalloc(q->nr_zones, sizeof(unsigned int), + GFP_NOIO); + if (!md->zwp_offset) + return -ENOMEM; + } + md->zwp_offset[idx] = dm_get_zone_wp_offset(zone); + + break; + default: + DMERR("Invalid zone type 0x%x at sectors %llu", + (int)zone->type, zone->start); + return -ENODEV; + } + + return 0; +} + +/* + * Revalidate the zones of a mapped device to initialize the resources + * necessary for zone append emulation. Note that we cannot simply use the + * block layer blk_revalidate_disk_zones() function here as the mapped device + * is suspended (this is called from __bind() context). + */ +static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t) +{ + struct request_queue *q = md->queue; + int ret; + + /* + * Check if something changed. If yes, clean up the current resources + * and reallocate everything.
+ */ + if (!q->nr_zones || q->nr_zones != md->nr_zones) + dm_cleanup_zoned_dev(md); + if (md->nr_zones) + return 0; + + /* Scan all zones to initialize everything */ + ret = dm_blk_do_report_zones(md, t, 0, q->nr_zones, + dm_zone_revalidate_cb, md); + if (ret < 0) + goto err; + if (ret != q->nr_zones) { + ret = -EIO; + goto err; + } + + md->nr_zones = q->nr_zones; + + return 0; + +err: + DMERR("Revalidate zones failed %d", ret); + dm_cleanup_zoned_dev(md); + return ret; +} + +static int device_not_zone_append_capable(struct dm_target *ti, + struct dm_dev *dev, sector_t start, + sector_t len, void *data) +{ + return !blk_queue_is_zoned(bdev_get_queue(dev->bdev)); +} + +static bool dm_table_supports_zone_append(struct dm_table *t) +{ + struct dm_target *ti; + unsigned int i; + + for (i = 0; i < dm_table_get_num_targets(t); i++) { + ti = dm_table_get_target(t, i); + + if (ti->emulate_zone_append) + return false; + + if (!ti->type->iterate_devices || + ti->type->iterate_devices(ti, device_not_zone_append_capable, NULL)) + return false; + } + + return true; +} + +int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) +{ + struct mapped_device *md = t->md; /* * For a zoned target, the number of zones should be updated for the - * correct value to be exposed in sysfs queue/nr_zones. For a BIO based - * target, this is all that is needed. + * correct value to be exposed in sysfs queue/nr_zones. */ WARN_ON_ONCE(queue_is_mq(q)); - q->nr_zones = blkdev_nr_zones(t->md->disk); + q->nr_zones = blkdev_nr_zones(md->disk); + + /* Check if zone append is natively supported */ + if (dm_table_supports_zone_append(t)) { + clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + dm_cleanup_zoned_dev(md); + return 0; + } + + /* + * Mark the mapped device as needing zone append emulation and + * initialize the emulation resources once the capacity is set. + */ + set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + if (!get_capacity(md->disk)) + return 0; + + return dm_revalidate_zones(md, t); +} + +static int dm_update_zone_wp_offset_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ + unsigned int *wp_offset = data; + + *wp_offset = dm_get_zone_wp_offset(zone); + + return 0; +} + +static int dm_update_zone_wp_offset(struct mapped_device *md, unsigned int zno, + unsigned int *wp_ofst) +{ + sector_t sector = zno * blk_queue_zone_sectors(md->queue); + unsigned int noio_flag; + struct dm_table *t; + int srcu_idx, ret; + + t = dm_get_live_table(md, &srcu_idx); + if (!t) + return -EIO; + + /* + * Ensure that all memory allocations in this context are done as if + * GFP_NOIO was specified. + */ + noio_flag = memalloc_noio_save(); + ret = dm_blk_do_report_zones(md, t, sector, 1, + dm_update_zone_wp_offset_cb, wp_ofst); + memalloc_noio_restore(noio_flag); + + dm_put_live_table(md, srcu_idx); + + if (ret != 1) + return -EIO; + + return 0; +} + +/* + * First phase of BIO mapping for targets with zone append emulation: + * check all BIOs that change a zone write pointer and change zone + * append operations into regular write operations.
+ */ +static bool dm_zone_map_bio_begin(struct mapped_device *md, + struct bio *orig_bio, struct bio *clone) +{ + struct request_queue *q = md->queue; + unsigned int zno = bio_zone_no(q, orig_bio); + sector_t zone_sectors = blk_queue_zone_sectors(q); + unsigned long flags; + bool good_io = false; + + spin_lock_irqsave(&md->zwp_offset_lock, flags); + + /* + * If the target zone is in an error state, recover by inspecting the + * zone to get its current write pointer position. Note that since the + * target zone is already locked, a BIO issuing context should never + * see the zone write pointer in the DM_ZONE_UPDATING_WP_OFST state. + */ + if (md->zwp_offset[zno] == DM_ZONE_INVALID_WP_OFST) { + unsigned int wp_offset; + int ret; + + md->zwp_offset[zno] = DM_ZONE_UPDATING_WP_OFST; + + spin_unlock_irqrestore(&md->zwp_offset_lock, flags); + ret = dm_update_zone_wp_offset(md, zno, &wp_offset); + spin_lock_irqsave(&md->zwp_offset_lock, flags); + + if (ret) { + md->zwp_offset[zno] = DM_ZONE_INVALID_WP_OFST; + goto out; + } + md->zwp_offset[zno] = wp_offset; + } else if (md->zwp_offset[zno] == DM_ZONE_UPDATING_WP_OFST) { + DMWARN_LIMIT("Invalid DM_ZONE_UPDATING_WP_OFST state"); + goto out; + } + + switch (bio_op(orig_bio)) { + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + break; + case REQ_OP_ZONE_RESET: + case REQ_OP_ZONE_FINISH: + goto good; + case REQ_OP_ZONE_APPEND: + /* + * Change a zone append operation into a non-mergeable regular + * write directed at the current write pointer position of the + * target zone. + */ + clone->bi_opf = REQ_OP_WRITE | REQ_NOMERGE | + (orig_bio->bi_opf & (~REQ_OP_MASK)); + clone->bi_iter.bi_sector = + orig_bio->bi_iter.bi_sector + md->zwp_offset[zno]; + break; + default: + DMWARN_LIMIT("Invalid BIO operation"); + goto out; + } + + /* Cannot write to a full zone */ + if (md->zwp_offset[zno] >= zone_sectors) + goto out; + + /* Writes must be aligned to the zone write pointer */ + if ((clone->bi_iter.bi_sector & (zone_sectors - 1)) != md->zwp_offset[zno]) + goto out; + +good: + good_io = true; + +out: + spin_unlock_irqrestore(&md->zwp_offset_lock, flags); + + return good_io; +} + +/* + * Second phase of BIO mapping for targets with zone append emulation: + * update the zone write pointer offset array to account for the additional + * data written to a zone. Note that at this point, the remapped clone BIO + * may already have completed, so we do not touch it. + */ +static blk_status_t dm_zone_map_bio_end(struct mapped_device *md, + struct bio *orig_bio, + unsigned int nr_sectors) +{ + struct request_queue *q = md->queue; + unsigned int zno = bio_zone_no(q, orig_bio); + blk_status_t sts = BLK_STS_OK; + unsigned long flags; + + spin_lock_irqsave(&md->zwp_offset_lock, flags); + + /* Update the zone wp offset */ + switch (bio_op(orig_bio)) { + case REQ_OP_ZONE_RESET: + md->zwp_offset[zno] = 0; + break; + case REQ_OP_ZONE_FINISH: + md->zwp_offset[zno] = blk_queue_zone_sectors(q); + break; + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + md->zwp_offset[zno] += nr_sectors; + break; + case REQ_OP_ZONE_APPEND: + /* + * Check that the target did not truncate the write operation + * emulating a zone append.
+ */ + if (nr_sectors != bio_sectors(orig_bio)) { + DMWARN_LIMIT("Truncated write for zone append"); + sts = BLK_STS_IOERR; + break; + } + md->zwp_offset[zno] += nr_sectors; + break; + default: + DMWARN_LIMIT("Invalid BIO operation"); + sts = BLK_STS_IOERR; + break; + } + + spin_unlock_irqrestore(&md->zwp_offset_lock, flags); + + return sts; +} + +static inline void dm_zone_lock(struct request_queue *q, + struct bio *clone, unsigned int zno) +{ + if (WARN_ON_ONCE(bio_flagged(clone, BIO_ZONE_WRITE_LOCKED))) + return; + + wait_on_bit_lock_io(q->seq_zones_wlock, zno, TASK_UNINTERRUPTIBLE); + bio_set_flag(clone, BIO_ZONE_WRITE_LOCKED); +} + +static inline void dm_zone_unlock(struct request_queue *q, + struct bio *clone, unsigned int zno) +{ + if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED)) + return; + + WARN_ON_ONCE(!test_bit(zno, q->seq_zones_wlock)); + clear_bit_unlock(zno, q->seq_zones_wlock); + smp_mb__after_atomic(); + wake_up_bit(q->seq_zones_wlock, zno); + + bio_clear_flag(clone, BIO_ZONE_WRITE_LOCKED); +} + +static bool dm_need_zone_wp_tracking(struct request_queue *q, + struct bio *orig_bio) +{ + /* + * Special processing is not needed for operations that do not need the + * zone write lock, that is, all operations that target conventional + * zones and all operations that do not directly modify a sequential + * zone write pointer. + */ + if (op_is_flush(orig_bio->bi_opf) && !bio_sectors(orig_bio)) + return false; + switch (bio_op(orig_bio)) { + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + case REQ_OP_ZONE_RESET: + case REQ_OP_ZONE_FINISH: + case REQ_OP_ZONE_APPEND: + return bio_zone_is_seq(q, orig_bio); + default: + return false; + } +} + +/* + * Special IO mapping for targets needing zone append emulation. + */ +int dm_zone_map_bio(struct dm_target_io *tio) +{ + struct dm_io *io = tio->io; + struct dm_target *ti = tio->ti; + struct mapped_device *md = io->md; + struct request_queue *q = md->queue; + struct bio *orig_bio = io->orig_bio; + struct bio *clone = &tio->clone; + unsigned int zno = bio_zone_no(q, orig_bio); + blk_status_t sts; + int r; + + /* + * IOs that do not change a zone write pointer do not need + * any additional special processing. + */ + if (!dm_need_zone_wp_tracking(q, orig_bio)) + return ti->type->map(ti, clone); + + /* Lock the target zone */ + dm_zone_lock(q, clone, zno); + + /* + * Check that the bio and the target zone write pointer offset are + * both valid, and if the bio is a zone append, remap it to a write. + */ + if (!dm_zone_map_bio_begin(md, orig_bio, clone)) { + dm_zone_unlock(q, clone, zno); + return DM_MAPIO_KILL; + } + + /* + * The target map function may issue and complete the IO quickly. + * Take an extra reference on the IO to make sure it does not disappear + * until we run dm_zone_map_bio_end(). + */ + dm_io_inc_pending(io); + + /* Let the target do its work */ + r = ti->type->map(ti, clone); + switch (r) { + case DM_MAPIO_SUBMITTED: + /* + * The target submitted the clone BIO. The target zone will + * be unlocked on completion of the clone. + */ + sts = dm_zone_map_bio_end(md, orig_bio, *tio->len_ptr); + break; + case DM_MAPIO_REMAPPED: + /* + * The target only remapped the clone BIO. In case of error, + * unlock the target zone here as the clone will not be + * submitted.
+ */ + sts = dm_zone_map_bio_end(md, orig_bio, *tio->len_ptr); + if (sts != BLK_STS_OK) + dm_zone_unlock(q, clone, zno); + break; + case DM_MAPIO_REQUEUE: + case DM_MAPIO_KILL: + default: + dm_zone_unlock(q, clone, zno); + sts = BLK_STS_IOERR; + break; + } + + /* Drop the extra reference on the IO */ + dm_io_dec_pending(io, sts); + + if (sts != BLK_STS_OK) + return DM_MAPIO_KILL; + + return r; +} + +/* + * IO completion callback called from clone_endio(). + */ +void dm_zone_endio(struct dm_io *io, struct bio *clone) +{ + struct mapped_device *md = io->md; + struct request_queue *q = md->queue; + struct bio *orig_bio = io->orig_bio; + unsigned long flags; + unsigned int zno; + + /* + * For targets that do not emulate zone append, we only need to + * handle native zone-append bios. + */ + if (!dm_emulate_zone_append(md)) { + /* + * Get the offset within the zone of the written sector + * and add that to the original bio sector position. + */ + if (clone->bi_status == BLK_STS_OK && + bio_op(clone) == REQ_OP_ZONE_APPEND) { + sector_t mask = (sector_t)blk_queue_zone_sectors(q) - 1; + + orig_bio->bi_iter.bi_sector += + clone->bi_iter.bi_sector & mask; + } + + return; + } + + /* + * For targets that do emulate zone append, if the clone BIO does not + * own the target zone write lock, we have nothing to do. + */ + if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED)) + return; + + zno = bio_zone_no(q, orig_bio); + + spin_lock_irqsave(&md->zwp_offset_lock, flags); + if (clone->bi_status != BLK_STS_OK) { + /* + * BIOs that modify a zone write pointer may leave the zone + * in an unknown state in case of failure (e.g. the write + * pointer was only partially advanced). In this case, set + * the target zone write pointer as invalid unless it is + * already being updated. + */ + if (md->zwp_offset[zno] != DM_ZONE_UPDATING_WP_OFST) + md->zwp_offset[zno] = DM_ZONE_INVALID_WP_OFST; + } else if (bio_op(orig_bio) == REQ_OP_ZONE_APPEND) { + /* + * Get the written sector for zone append operations that were + * emulated using regular write operations. + */ + if (WARN_ON_ONCE(md->zwp_offset[zno] < bio_sectors(orig_bio))) + md->zwp_offset[zno] = DM_ZONE_INVALID_WP_OFST; + else + orig_bio->bi_iter.bi_sector += + md->zwp_offset[zno] - bio_sectors(orig_bio); + } + spin_unlock_irqrestore(&md->zwp_offset_lock, flags); + + dm_zone_unlock(q, clone, zno); }

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 563504163b74..5038bf522b0d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -875,7 +875,6 @@ static void clone_endio(struct bio *bio) struct dm_io *io = tio->io; struct mapped_device *md = tio->io->md; dm_endio_fn endio = tio->ti->type->end_io; - struct bio *orig_bio = io->orig_bio; struct request_queue *q = bio->bi_bdev->bd_disk->queue; if (unlikely(error == BLK_STS_TARGET)) { @@ -890,17 +889,8 @@ static void clone_endio(struct bio *bio) disable_write_zeroes(md); } - /* - * For zone-append bios get offset in zone of the written - * sector and add that to the original bio sector pos.
- */ - if (bio_op(orig_bio) == REQ_OP_ZONE_APPEND) { - sector_t written_sector = bio->bi_iter.bi_sector; - struct request_queue *q = orig_bio->bi_bdev->bd_disk->queue; - u64 mask = (u64)blk_queue_zone_sectors(q) - 1; - - orig_bio->bi_iter.bi_sector += written_sector & mask; - } + if (blk_queue_is_zoned(q)) + dm_zone_endio(io, bio); if (endio) { int r = endio(tio->ti, bio, &error); @@ -1213,7 +1203,16 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) down(&md->swap_bios_semaphore); } - r = ti->type->map(ti, clone); + /* + * Check if the IO needs a special mapping due to zone append emulation + * on a zoned target. In this case, dm_zone_map_bio() calls the target + * map operation. + */ + if (dm_emulate_zone_append(io->md)) + r = dm_zone_map_bio(tio); + else + r = ti->type->map(ti, clone); + switch (r) { case DM_MAPIO_SUBMITTED: break; @@ -1757,6 +1756,7 @@ static struct mapped_device *alloc_dev(int minor) INIT_LIST_HEAD(&md->uevent_list); INIT_LIST_HEAD(&md->table_devices); spin_lock_init(&md->uevent_lock); + dm_init_zoned_dev(md); /* * default to bio-based until DM table is loaded and md->type @@ -1956,11 +1956,16 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, goto out; } + ret = dm_table_set_restrictions(t, q, limits); + if (ret) { + old_map = ERR_PTR(ret); + goto out; + } + old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock)); rcu_assign_pointer(md->map, (void *)t); md->immutable_target_type = dm_table_get_immutable_target_type(t); - dm_table_set_restrictions(t, q, limits); if (old_map) dm_sync_table(md); @@ -2079,7 +2084,10 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t) DMERR("Cannot calculate initial queue limits"); return r; } - dm_table_set_restrictions(t, md->queue, &limits); + r = dm_table_set_restrictions(t, md->queue, &limits); + if (r) + return r; + blk_register_queue(md->disk); return 0; @@ -2188,6 +2196,7 @@ static void __dm_destroy(struct mapped_device *md, bool wait) dm_device_name(md), atomic_read(&md->holders)); dm_sysfs_exit(md); + dm_cleanup_zoned_dev(md); dm_table_destroy(__unbind(md)); free_dev(md); }

diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 39c243258e24..65f20d8cc415 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -45,6 +45,8 @@ struct dm_dev_internal { struct dm_table; struct dm_md_mempools; +struct dm_target_io; +struct dm_io; /*----------------------------------------------------------------- * Internal table functions. @@ -56,8 +58,8 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector); bool dm_table_has_no_data_devices(struct dm_table *table); int dm_calculate_queue_limits(struct dm_table *table, struct queue_limits *limits); -void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, - struct queue_limits *limits); +int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + struct queue_limits *limits); struct list_head *dm_table_get_devices(struct dm_table *t); void dm_table_presuspend_targets(struct dm_table *t); void dm_table_presuspend_undo_targets(struct dm_table *t); @@ -103,17 +105,27 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t); /* * Zoned targets related functions.
*/ -void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); +int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); +void dm_zone_endio(struct dm_io *io, struct bio *clone); #ifdef CONFIG_BLK_DEV_ZONED +void dm_init_zoned_dev(struct mapped_device *md); +void dm_cleanup_zoned_dev(struct mapped_device *md); int dm_blk_report_zones(struct gendisk *disk, sector_t sector, unsigned int nr_zones, report_zones_cb cb, void *data); bool dm_is_zone_write(struct mapped_device *md, struct bio *bio); +int dm_zone_map_bio(struct dm_target_io *io); #else +static inline void dm_init_zoned_dev(struct mapped_device *md) {} +static inline void dm_cleanup_zoned_dev(struct mapped_device *md) {} #define dm_blk_report_zones NULL static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) { return false; } +static inline int dm_zone_map_bio(struct dm_target_io *tio) +{ + return DM_MAPIO_KILL; +} #endif /*-----------------------------------------------------------------

diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index caea0a079d2d..7457d49acf9a 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -361,6 +361,12 @@ struct dm_target { * Set if we need to limit the number of in-flight bios when swapping. */ bool limit_swap_bios:1; + + /* + * Set if this target implements a zoned device and needs emulation of + * zone append operations using regular writes. + */ + bool emulate_zone_append:1; }; void *dm_per_bio_data(struct bio *bio, size_t data_size);
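The completion-side bookkeeping of the emulation reduces to one line of arithmetic: an emulated append is submitted with bi_sector pointing at the zone start, and on completion the sector that was actually written is recovered from the tracked write pointer offset, which by then already includes the sectors just written. A standalone sketch with invented, arbitrary values (not the patch's code) makes the computation concrete:

#include <stdio.h>

int main(void)
{
	unsigned long long zone_start = 0x40000;	/* hypothetical zone start */
	unsigned long long orig_sector = zone_start;	/* bi_sector as submitted */
	unsigned long long nr_sectors = 8;		/* size of the append */
	unsigned long long zwp_offset = 24;		/* wp offset after this write */

	/*
	 * Mirrors the fix-up in dm_zone_endio():
	 * orig_bio->bi_iter.bi_sector += zwp_offset[zno] - bio_sectors(orig_bio)
	 */
	orig_sector += zwp_offset - nr_sectors;

	/* 0x40000 + 24 - 8 = 0x40010: the write started 16 sectors into the zone */
	printf("zone append reported as written at sector %llu\n", orig_sector);
	return 0;
}

This also explains the WARN_ON_ONCE in dm_zone_endio(): an offset smaller than the append size would mean the bookkeeping went backwards, so the offset is marked invalid instead.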
G83etPUZRvqKCzvwC+ewnXQ51XLjs2kBNvANP+oIwzD8fyZ89I8sxlmcHm9HX/y9RuoCGfUAe6 aGViTotYxtHwYnltqqrWblqBH9STMFbH7NYQD51wkeY8AVeZpWNFg7vrTQNtVjZHF5BThz/VFc RXy68kan3OqgouyeZ99lXdyt6lXMLTeGigSLXocTPg8N0qYgGRxpqzW47E9jrhAHmzVhd0vnIO o1g= X-IronPort-AV: E=Sophos;i="5.82,311,1613404800"; d="scan'208";a="173265913" Received: from uls-op-cesaip01.wdc.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 19 May 2021 10:55:43 +0800 IronPort-SDR: tbYcbzyvoA0HxGg7xQ/lebnt5/JapSvWYHufEvV9xz06pByHp99hmMhkw0KsobXO8wg7uXszFZ QXDZjKGBoRWa5qNiL7qnzGsO4wd6Ap1yTPdjDclna8MY1+AD0xViPrii4gQr7ZNA+Xa3YKSf2C w1KMW/oL4C76dl18Gq4kirZBESSlzsxDYc7KIKER/RDMomLLksbU+BqkwmBY/Dn7UTDd2ZTl85 vmLTZqk416yuguEyCeLMKjXgWa1e68q4YhJL44EtHBIYsIeAgawYUFxsRc15zQAlUU4bqU5qXN cJ72Fzi1VyW043sD6XMRVIGi Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 May 2021 19:35:22 -0700 IronPort-SDR: 6A8IdEnTL5S95ycxdlnSTWMMDa1i+XG5rlrA0x2Me19yJDH1rC8v/SYmy4PXFyVUtX5nY7Gbq5 /UAZX7oR7MAcT9AJnThInzxfux3sJMTQ5XDbPPv/bQfUjkNdWH7sKV0DW6N5MoazIGXyapTa+L TSi/qgfKNJsUXXJVAJF0cQOGw+k6M0hCixREgHUupwDVcB5z85cIMeLA8EUHfCuh3BCTFcCYr/ XCdiLuLPHCDN6DcnaY4sNuEjNkPnG2MTM4lgcnbFQBwRNhLf+IQl572ELvhOBnGN/GmmjDt3xv FeI= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 18 May 2021 19:55:42 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH 11/11] dm crypt: Fix zoned block device support Date: Wed, 19 May 2021 11:55:29 +0900 Message-Id: <20210519025529.707897-12-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210519025529.707897-1-damien.lemoal@wdc.com> References: <20210519025529.707897-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Zone append BIOs (REQ_OP_ZONE_APPEND) always specify the start sector of the zone to be written instead of the actual sector location to write. The write location is determined by the device and returned to the host upon completion of the operation. This interface, while simple and efficient for writing into sequential zones of a zoned block device, is incompatible with the use of sector values to calculate a cypher block IV. All data written in a zone end up using the same IV values corresponding to the first sectors of the zone, but read operation will specify any sector within the zone resulting in an IV mismatch between encryption and decryption. To solve this problem, report to DM core that zone append operations are not supported. This result in the zone append operations being emulated using regular write operations. Reported-by: Shin'ichiro Kawasaki Signed-off-by: Damien Le Moal --- drivers/md/dm-crypt.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index f410ceee51d7..44339823371c 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -3280,14 +3280,28 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) } cc->start = tmpll; - /* - * For zoned block devices, we need to preserve the issuer write - * ordering. To do so, disable write workqueues and force inline - * encryption completion. - */ if (bdev_is_zoned(cc->dev->bdev)) { + /* + * For zoned block devices, we need to preserve the issuer write + * ordering. 
To do so, disable write workqueues + encryption completion. */ + set_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags); + set_bit(DM_CRYPT_WRITE_INLINE, &cc->flags); + + /* + * All zone append writes to a zone of a zoned block device will + * have the same BIO sector, the start of the zone. When the + * cipher IV mode uses sector values, all data targeting a + * zone will be encrypted using the first sector values of the + * zone. This will not result in write errors but will + * cause most reads to fail as reads will use the sector values + * for the actual data locations, resulting in an IV mismatch. + * To avoid this problem, ask DM core to emulate zone append + * operations with regular writes. + */ + DMWARN("Zone append operations will be emulated"); + ti->emulate_zone_append = true; } if (crypt_integrity_aead(cc) || cc->integrity_iv_size) {
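The IV mismatch that motivates this patch can be demonstrated without any crypto at all. The standalone sketch below is purely illustrative, with invented names and a deliberately trivial IV function that just returns the sector number (real dm-crypt IV generation is more involved; only the dependency on the sector value matters here). With native zone append, the write-side IV would be derived from the zone start sector while the read-side IV is derived from the actual data location, so the two disagree:

#include <stdio.h>

/* Toy plain64-style IV: the IV is simply the sector number. */
static unsigned long long iv_for_sector(unsigned long long sector)
{
	return sector;
}

int main(void)
{
	unsigned long long zone_start = 0x80000;	/* hypothetical zone start */

	/*
	 * With native zone append, every append BIO carries the zone start
	 * sector, so data at any position in the zone would be encrypted
	 * with the same IV...
	 */
	unsigned long long write_iv = iv_for_sector(zone_start);

	/* ...but a later read uses the actual data location for its IV. */
	unsigned long long read_iv = iv_for_sector(zone_start + 16);

	printf("write IV %llu, read IV %llu -> %s\n", write_iv, read_iv,
	       write_iv == read_iv ? "match" : "mismatch: decryption fails");
	return 0;
}

Emulating zone append with regular writes, as the patch does via ti->emulate_zone_append, makes the submitted sector equal the written sector, so both sides derive the same IV.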