From patchwork Fri Jun 30 08:39:31 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13297807
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH v2 1/5] scsi: sd_zbc: Set zone limits before revalidating zones
Date: Fri, 30 Jun 2023 17:39:31 +0900
Message-ID: <20230630083935.433334-2-dlemoal@kernel.org>
In-Reply-To: <20230630083935.433334-1-dlemoal@kernel.org>
References: <20230630083935.433334-1-dlemoal@kernel.org>

In sd_zbc_revalidate_zones(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the ZBC device zone size and
maximum zone append sector limit, respectively, before calling
blk_revalidate_disk_zones(). This allows the block layer zone revalidation
code to check these device characteristics before checking all zones of
the device.

Since blk_queue_max_zone_append_sectors() already caps the device maximum
zone append limit to the zone size and to the maximum command size, the
max_append value passed to blk_queue_max_zone_append_sectors() is
simplified to the maximum number of segments times the number of sectors
per page.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 drivers/scsi/sd_zbc.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 22801c24ea19..a25215507668 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -831,7 +831,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
 	struct request_queue *q = disk->queue;
 	u32 zone_blocks = sdkp->early_zone_info.zone_blocks;
 	unsigned int nr_zones = sdkp->early_zone_info.nr_zones;
-	u32 max_append;
 	int ret = 0;
 	unsigned int flags;

@@ -876,6 +875,11 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
 		goto unlock;
 	}

+	blk_queue_chunk_sectors(q,
+			logical_to_sectors(sdkp->device, zone_blocks));
+	blk_queue_max_zone_append_sectors(q,
+			q->limits.max_segments << PAGE_SECTORS_SHIFT);
+
 	ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb);
 	memalloc_noio_restore(flags);

@@ -888,12 +892,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
 		goto unlock;
 	}

-	max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks),
-			   q->limits.max_segments << (PAGE_SHIFT - 9));
-	max_append = min_t(u32, max_append, queue_max_hw_sectors(q));
-
-	blk_queue_max_zone_append_sectors(q, max_append);
-
 	sd_zbc_print_zones(sdkp);

 unlock:
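
For reference, the simplification above relies on the fact that
blk_queue_max_zone_append_sectors() itself caps the value it is given to the
zone size (chunk_sectors) and to the maximum command size (max_hw_sectors),
as the commit message notes. The following is a minimal userspace model of
that capping arithmetic, not kernel code; the limit values and the 4 KiB
page assumption behind PAGE_SECTORS_SHIFT = 3 are invented for the example.

#include <stdio.h>

#define PAGE_SECTORS_SHIFT 3    /* assumes 4 KiB pages: 8 sectors of 512 B */

static unsigned int min_u32(unsigned int a, unsigned int b)
{
        return a < b ? a : b;
}

int main(void)
{
        unsigned int max_segments = 128;        /* assumed queue limit */
        unsigned int max_hw_sectors = 2048;     /* assumed 1 MiB max command */
        unsigned int zone_sectors = 524288;     /* assumed 256 MiB zone size */

        /* Value the patch now passes to blk_queue_max_zone_append_sectors() */
        unsigned int requested = max_segments << PAGE_SECTORS_SHIFT;

        /*
         * Model of the capping done by the helper: the zone append limit can
         * never exceed the zone size or the maximum command size, so the
         * driver-side min_t() sequence removed by this patch was redundant.
         */
        unsigned int effective = min_u32(min_u32(requested, zone_sectors),
                                         max_hw_sectors);

        printf("requested = %u sectors, effective = %u sectors\n",
               requested, effective);
        return 0;
}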
Petersen" Subject: [PATCH v2 2/5] nvme: zns: Set zone limits before revalidating zones Date: Fri, 30 Jun 2023 17:39:32 +0900 Message-ID: <20230630083935.433334-3-dlemoal@kernel.org> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230630083935.433334-1-dlemoal@kernel.org> References: <20230630083935.433334-1-dlemoal@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org In nvme_revalidate_zones(), execute blk_queue_chunk_sectors() and blk_queue_max_zone_append_sectors() to respectively set a ZNS namespace zone size and maximum zone append sector limit before executing blk_revalidate_disk_zones(). This is to allow the block layer zone reavlidation to check these device characteristics prior to checking all zones of the device. Signed-off-by: Damien Le Moal Reviewed-by: Christoph Hellwig --- drivers/nvme/host/zns.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c index 12316ab51bda..ec8557810c21 100644 --- a/drivers/nvme/host/zns.c +++ b/drivers/nvme/host/zns.c @@ -10,12 +10,11 @@ int nvme_revalidate_zones(struct nvme_ns *ns) { struct request_queue *q = ns->queue; - int ret; - ret = blk_revalidate_disk_zones(ns->disk, NULL); - if (!ret) - blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); - return ret; + blk_queue_chunk_sectors(q, ns->zsze); + blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); + + return blk_revalidate_disk_zones(ns->disk, NULL); } static int nvme_set_max_append(struct nvme_ctrl *ctrl) From patchwork Fri Jun 30 08:39:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 13297809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 798B0EB64DA for ; Fri, 30 Jun 2023 08:40:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232620AbjF3Ijq (ORCPT ); Fri, 30 Jun 2023 04:39:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232604AbjF3Ijm (ORCPT ); Fri, 30 Jun 2023 04:39:42 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 433C01FCD; Fri, 30 Jun 2023 01:39:41 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D4EF6616F7; Fri, 30 Jun 2023 08:39:40 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7B966C433C8; Fri, 30 Jun 2023 08:39:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1688114380; bh=bbajuzLvUWwwdTbld5XYYfeTrAfvglhYfafdpdararM=; h=From:To:Subject:Date:In-Reply-To:References:From; b=MvH5007mYb1dbvaGIRHyVq/UeUPYh2bGPJR4iycz3+9AkjKKFuufzOuI1Pe//M7Wg 2+cxs6U0CHDZZTrt9eN7cJqQOrbVNXBlsr3dVPRoNlWbl2n9BUUXYcdmATXjUe514h 5yTP+lCqaaFtFCtI1Y1jgEgSyCQqLmmdkDvKYeBTViL50nlNf8WmFiqzJ7rQBWt8xc u0Jx9p4oi6FCJo4u7pkjVZeA7wMelxip2C0J3GsSGKDBhEs1IrPVudCmhh7ZMsWrQ9 AdPEa2Gt+dE5ewF+osf5Hb8QbnnwiAZ9Q6C8I2wuH6MbTKtwB6ZEGnwYuZ+EYrWB8u WIGaOaQTzW5tw== 
From patchwork Fri Jun 30 08:39:33 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13297809
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH v2 3/5] block: nullblk: Set zone limits before revalidating zones
Date: Fri, 30 Jun 2023 17:39:33 +0900
Message-ID: <20230630083935.433334-4-dlemoal@kernel.org>
In-Reply-To: <20230630083935.433334-1-dlemoal@kernel.org>
References: <20230630083935.433334-1-dlemoal@kernel.org>

In null_register_zoned_dev(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the zoned device zone size and
maximum zone append sector limit, respectively, before calling
blk_revalidate_disk_zones(). This allows the block layer zone revalidation
code to check these device characteristics before checking all zones of
the device.

Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 drivers/block/null_blk/zoned.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 635ce0648133..55c5b48bc276 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -162,21 +162,15 @@ int null_register_zoned_dev(struct nullb *nullb)
 	disk_set_zoned(nullb->disk, BLK_ZONED_HM);
 	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
-	if (queue_is_mq(q)) {
-		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
-
-		if (ret)
-			return ret;
-	} else {
-		blk_queue_chunk_sectors(q, dev->zone_size_sects);
-		nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
-	}
-
+	blk_queue_chunk_sectors(q, dev->zone_size_sects);
+	nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
 	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
 	disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
 	disk_set_max_active_zones(nullb->disk, dev->zone_max_active);

+	if (queue_is_mq(q))
+		return blk_revalidate_disk_zones(nullb->disk, NULL);
+
 	return 0;
 }
From patchwork Fri Jun 30 08:39:34 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13297810
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH v2 4/5] block: virtio_blk: Set zone limits before revalidating zones
Date: Fri, 30 Jun 2023 17:39:34 +0900
Message-ID: <20230630083935.433334-5-dlemoal@kernel.org>
In-Reply-To: <20230630083935.433334-1-dlemoal@kernel.org>
References: <20230630083935.433334-1-dlemoal@kernel.org>

In virtblk_probe_zoned_device(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the zoned device zone size and
maximum zone append sector limit, respectively, before calling
blk_revalidate_disk_zones(). This allows the block layer zone revalidation
code to check these device characteristics before checking all zones of
the device.

Signed-off-by: Damien Le Moal
Reviewed-by: Dmitry Fomichev
Reviewed-by: Christoph Hellwig
---
 drivers/block/virtio_blk.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b47358da92a2..1fe011676d07 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -751,7 +751,6 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 {
 	u32 v, wg;
 	u8 model;
-	int ret;

 	virtio_cread(vdev, struct virtio_blk_config, zoned.model, &model);

@@ -806,6 +805,7 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 			vblk->zone_sectors);
 		return -ENODEV;
 	}
+	blk_queue_chunk_sectors(q, vblk->zone_sectors);
 	dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors);

 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
@@ -814,26 +814,22 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 		blk_queue_max_discard_sectors(q, 0);
 	}

-	ret = blk_revalidate_disk_zones(vblk->disk, NULL);
-	if (!ret) {
-		virtio_cread(vdev, struct virtio_blk_config,
-			     zoned.max_append_sectors, &v);
-		if (!v) {
-			dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
-			return -ENODEV;
-		}
-		if ((v << SECTOR_SHIFT) < wg) {
-			dev_err(&vdev->dev,
-				"write granularity %u exceeds max_append_sectors %u limit\n",
-				wg, v);
-			return -ENODEV;
-		}
-
-		blk_queue_max_zone_append_sectors(q, v);
-		dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+	virtio_cread(vdev, struct virtio_blk_config,
+		     zoned.max_append_sectors, &v);
+	if (!v) {
+		dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
+		return -ENODEV;
+	}
+	if ((v << SECTOR_SHIFT) < wg) {
+		dev_err(&vdev->dev,
+			"write granularity %u exceeds max_append_sectors %u limit\n",
+			wg, v);
+		return -ENODEV;
 	}
+	blk_queue_max_zone_append_sectors(q, v);
+	dev_dbg(&vdev->dev, "max append sectors = %u\n", v);

-	return ret;
+	return blk_revalidate_disk_zones(vblk->disk, NULL);
 }
 #else

From patchwork Fri Jun 30 08:39:35 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13297811
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
    "Martin K. Petersen"
Subject: [PATCH v2 5/5] block: improve checks in blk_revalidate_disk_zones()
Date: Fri, 30 Jun 2023 17:39:35 +0900
Message-ID: <20230630083935.433334-6-dlemoal@kernel.org>
In-Reply-To: <20230630083935.433334-1-dlemoal@kernel.org>
References: <20230630083935.433334-1-dlemoal@kernel.org>

blk_revalidate_disk_zones() implements checks of the zones of a zoned block
device, verifying that the zone size is a power-of-2 number of sectors,
that all zones (except possibly the last one) have the same size, and that
the zones cover the entire address space of the device. While these checks
are appropriate for verifying that well-tested hardware devices have an
adequate zone configuration, they fall short in some areas, which can lead
to issues with emulated devices implemented with user drivers such as ublk
or tcmu. Specifically, this function does not check whether the device
driver indicated support for the mandatory zone append writes, that is,
whether the device max_zone_append_sectors queue limit is set to a non-zero
value.
Additionally, invalid zones, such as a zero-length zone with a start sector
equal to the device capacity, are not detected, resulting in out-of-bounds
use of the zone bitmaps prepared by the blk_revalidate_zone_cb() callback
function.

Improve blk_revalidate_disk_zones() to address these inadequate checks,
relying on the fact that all device drivers supporting zoned block devices
must set the device zone size (chunk_sectors queue limit) and the
max_zone_append_sectors queue limit before executing this function. The
check for a non-zero max_zone_append_sectors value is done in
blk_revalidate_disk_zones() before executing the zone report. The zone
report callback function blk_revalidate_zone_cb() is also modified to check
that a zone start is below the device capacity. The check that the zone
size is a power-of-2 number of sectors is moved to
blk_revalidate_disk_zones(), since the zone size is already known there.
Similarly, the number of zones of the device can be calculated in
blk_revalidate_disk_zones() before executing the zone report.

The kdoc comment for blk_revalidate_disk_zones() is also updated to mention
that device drivers must set the device zone size and the
max_zone_append_sectors queue limit before calling this function.

Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 block/blk-zoned.c | 86 +++++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 36 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0f9f97cdddd9..619ee41a51cc 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -442,7 +442,6 @@ struct blk_revalidate_zone_args {
 	unsigned long *conv_zones_bitmap;
 	unsigned long *seq_zones_wlock;
 	unsigned int nr_zones;
-	sector_t zone_sectors;
 	sector_t sector;
 };

@@ -456,38 +455,34 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
 	struct gendisk *disk = args->disk;
 	struct request_queue *q = disk->queue;
 	sector_t capacity = get_capacity(disk);
+	sector_t zone_sectors = q->limits.chunk_sectors;
+
+	/* Check for bad zones and holes in the zone report */
+	if (zone->start != args->sector) {
+		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
+			disk->disk_name, args->sector, zone->start);
+		return -ENODEV;
+	}
+
+	if (zone->start >= capacity || !zone->len) {
+		pr_warn("%s: Invalid zone start %llu, length %llu\n",
+			disk->disk_name, zone->start, zone->len);
+		return -ENODEV;
+	}

 	/*
 	 * All zones must have the same size, with the exception on an eventual
 	 * smaller last zone.
 	 */
-	if (zone->start == 0) {
-		if (zone->len == 0 || !is_power_of_2(zone->len)) {
-			pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
-				disk->disk_name, zone->len);
-			return -ENODEV;
-		}
-
-		args->zone_sectors = zone->len;
-		args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
-	} else if (zone->start + args->zone_sectors < capacity) {
-		if (zone->len != args->zone_sectors) {
+	if (zone->start + zone->len < capacity) {
+		if (zone->len != zone_sectors) {
 			pr_warn("%s: Invalid zoned device with non constant zone size\n",
 				disk->disk_name);
 			return -ENODEV;
 		}
-	} else {
-		if (zone->len > args->zone_sectors) {
-			pr_warn("%s: Invalid zoned device with larger last zone size\n",
-				disk->disk_name);
-			return -ENODEV;
-		}
-	}
-
-	/* Check for holes in the zone report */
-	if (zone->start != args->sector) {
-		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
-			disk->disk_name, args->sector, zone->start);
+	} else if (zone->len > zone_sectors) {
+		pr_warn("%s: Invalid zoned device with larger last zone size\n",
+			disk->disk_name);
 		return -ENODEV;
 	}

@@ -526,11 +521,13 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
  * @disk: Target disk
  * @update_driver_data: Callback to update driver data on the frozen disk
  *
- * Helper function for low-level device drivers to (re) allocate and initialize
- * a disk request queue zone bitmaps. This functions should normally be called
- * within the disk ->revalidate method for blk-mq based drivers. For BIO based
- * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
- * is correct.
+ * Helper function for low-level device drivers to check and (re) allocate and
+ * initialize a disk request queue zone bitmaps. This functions should normally
+ * be called within the disk ->revalidate method for blk-mq based drivers.
+ * Before calling this function, the device driver must already have set the
+ * device zone size (chunk_sector limit) and the max zone append limit.
+ * For BIO based drivers, this function cannot be used. BIO based device drivers
+ * only need to set disk->nr_zones so that the sysfs exposed value is correct.
  * If the @update_driver_data callback function is not NULL, the callback is
  * executed with the device request queue frozen after all zones have been
  * checked.
@@ -539,9 +536,9 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 			      void (*update_driver_data)(struct gendisk *disk))
 {
 	struct request_queue *q = disk->queue;
-	struct blk_revalidate_zone_args args = {
-		.disk = disk,
-	};
+	sector_t zone_sectors = q->limits.chunk_sectors;
+	sector_t capacity = get_capacity(disk);
+	struct blk_revalidate_zone_args args = { };
 	unsigned int noio_flag;
 	int ret;

@@ -550,13 +547,31 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 	if (WARN_ON_ONCE(!queue_is_mq(q)))
 		return -EIO;

-	if (!get_capacity(disk))
-		return -EIO;
+	if (!capacity)
+		return -ENODEV;
+
+	/*
+	 * Checks that the device driver indicated a valid zone size and that
+	 * the max zone append limit is set.
+	 */
+	if (!zone_sectors || !is_power_of_2(zone_sectors)) {
+		pr_warn("%s: Invalid non power of two zone size (%llu)\n",
+			disk->disk_name, zone_sectors);
+		return -ENODEV;
+	}
+
+	if (!q->limits.max_zone_append_sectors) {
+		pr_warn("%s: Invalid 0 maximum zone append limit\n",
+			disk->disk_name);
+		return -ENODEV;
+	}

 	/*
 	 * Ensure that all memory allocations in this context are done as if
 	 * GFP_NOIO was specified.
 	 */
+	args.disk = disk;
+	args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);
 	noio_flag = memalloc_noio_save();
 	ret = disk->fops->report_zones(disk, 0, UINT_MAX,
 				       blk_revalidate_zone_cb, &args);
@@ -570,7 +585,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 	 * If zones where reported, make sure that the entire disk capacity
 	 * has been checked.
 	 */
-	if (ret > 0 && args.sector != get_capacity(disk)) {
+	if (ret > 0 && args.sector != capacity) {
 		pr_warn("%s: Missing zones from sector %llu\n",
 			disk->disk_name, args.sector);
 		ret = -ENODEV;
@@ -583,7 +598,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
 	 */
 	blk_mq_freeze_queue(q);
 	if (ret > 0) {
-		blk_queue_chunk_sectors(q, args.zone_sectors);
 		disk->nr_zones = args.nr_zones;
 		swap(disk->seq_zones_wlock, args.seq_zones_wlock);
 		swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);
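
For reference, the nr_zones value that blk_revalidate_disk_zones() now
derives up front, together with the power-of-two zone size check, can be
modelled in userspace as follows; the capacity and zone size are assumed
example values, and is_power_of_2()/ilog2() are reimplemented trivially here
rather than taken from the kernel.

#include <stdio.h>
#include <stdbool.h>

typedef unsigned long long sector_t;

static bool is_power_of_2(sector_t n)
{
        return n != 0 && (n & (n - 1)) == 0;
}

static unsigned int ilog2_u64(sector_t n)
{
        unsigned int log = 0;

        while (n >>= 1)
                log++;
        return log;
}

int main(void)
{
        sector_t capacity = 7814037168ULL;      /* assumed ~3.7 TiB SMR disk */
        sector_t zone_sectors = 524288;         /* assumed 256 MiB zones */

        /* Same validation the patch adds before running the zone report */
        if (!zone_sectors || !is_power_of_2(zone_sectors)) {
                fprintf(stderr, "invalid zone size\n");
                return 1;
        }

        /* Same rounding-up division used for args.nr_zones in the patch */
        unsigned int nr_zones =
                (capacity + zone_sectors - 1) >> ilog2_u64(zone_sectors);

        /* Here: 14904 full zones plus one smaller last zone */
        printf("nr_zones = %u\n", nr_zones);
        return 0;
}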