From patchwork Sat Apr 17 02:33:21 2021
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
	Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
	linux-scsi@vger.kernel.org, "Martin K. Petersen",
	linux-fsdevel@vger.kernel.org
Cc: Johannes Thumshirn, Shinichiro Kawasaki
Subject: [PATCH v2 1/3] dm: Introduce zone append support control
Date: Sat, 17 Apr 2021 11:33:21 +0900
Message-Id: <20210417023323.852530-2-damien.lemoal@wdc.com>
In-Reply-To: <20210417023323.852530-1-damien.lemoal@wdc.com>
References: <20210417023323.852530-1-damien.lemoal@wdc.com>

Add the boolean field zone_append_not_supported to the dm_target
structure to allow a target implementing a zoned block device to
explicitly opt out of zone append (REQ_OP_ZONE_APPEND) operation support.
When the target constructor sets this field to true, the target device
queue limit max_zone_append_sectors is set to 0 in
dm_table_set_restrictions() so that users of the target (e.g. file
systems) can detect that the device cannot process zone append
operations.

Target support for zone append is detected in the same way as other
device features such as secure erase, using a helper function. For zone
append, the function dm_table_supports_zone_append() is defined if
CONFIG_BLK_DEV_ZONED is enabled.
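The effect on the queue limit can be sketched as a small userspace model (the `model_*` names are illustrative, not the kernel API): a table supports zone append only if no target opted out and every underlying device reports a non-zero max_zone_append_sectors; otherwise the limit is forced to 0.

```c
#include <stdbool.h>

/*
 * Toy model of dm_table_set_restrictions(): a dm table supports zone
 * append only when every target does. A target "supports" it when it did
 * not opt out and its underlying device reports a non-zero limit.
 */
struct model_target {
	bool zone_append_not_supported;   /* set by the target constructor */
	unsigned int dev_max_zone_append; /* underlying queue limit, in sectors */
};

static unsigned int model_max_zone_append(const struct model_target *t,
					  int nr_targets,
					  unsigned int dev_limit)
{
	for (int i = 0; i < nr_targets; i++)
		if (t[i].zone_append_not_supported ||
		    !t[i].dev_max_zone_append)
			return 0; /* users (e.g. an FS) see "not supported" */
	return dev_limit;
}
```

A filesystem then only needs to test the resulting limit for zero, which is exactly what patch 3 of this series does in zonefs.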
Fixes: 8e225f04d2dd ("dm crypt: Enable zoned block device support")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm-table.c         | 41 +++++++++++++++++++++++++++++++++++
 include/linux/device-mapper.h |  6 +++++
 2 files changed, 47 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index e5f0f1703c5d..9efd7a0ee27e 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1999,6 +1999,37 @@ static int device_requires_stable_pages(struct dm_target *ti,
 	return blk_queue_stable_writes(q);
 }
 
+#ifdef CONFIG_BLK_DEV_ZONED
+static int device_not_zone_append_capable(struct dm_target *ti,
+					  struct dm_dev *dev, sector_t start,
+					  sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return !blk_queue_is_zoned(q) ||
+		!q->limits.max_zone_append_sectors;
+}
+
+static bool dm_table_supports_zone_append(struct dm_table *t)
+{
+	struct dm_target *ti;
+	unsigned int i;
+
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (ti->zone_append_not_supported)
+			return false;
+
+		if (!ti->type->iterate_devices ||
+		    ti->type->iterate_devices(ti, device_not_zone_append_capable, NULL))
+			return false;
+	}
+
+	return true;
+}
+#endif
+
 void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			       struct queue_limits *limits)
 {
@@ -2091,6 +2122,16 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (blk_queue_is_zoned(q)) {
 		WARN_ON_ONCE(queue_is_mq(q));
 		q->nr_zones = blkdev_nr_zones(t->md->disk);
+
+		/*
+		 * All zoned devices support zone append by default. However,
+		 * some zoned targets (e.g. dm-crypt) cannot support this
+		 * operation. Check here if the target indicated the lack of
+		 * support for zone append and set max_zone_append_sectors to 0
+		 * in that case so that users (e.g. an FS) can detect this fact.
+		 */
+		if (!dm_table_supports_zone_append(t))
+			q->limits.max_zone_append_sectors = 0;
 	}
 #endif
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 5c641f930caf..4da699add262 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -361,6 +361,12 @@ struct dm_target {
 	 * Set if we need to limit the number of in-flight bios when swapping.
 	 */
 	bool limit_swap_bios:1;
+
+	/*
+	 * Set if this target is a zoned device that cannot accept
+	 * zone append operations.
+	 */
+	bool zone_append_not_supported:1;
 };
 
 void *dm_per_bio_data(struct bio *bio, size_t data_size);

From patchwork Sat Apr 17 02:33:22 2021
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
	Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
	linux-scsi@vger.kernel.org, "Martin K. Petersen",
	linux-fsdevel@vger.kernel.org
Cc: Johannes Thumshirn, Shinichiro Kawasaki
Subject: [PATCH v2 2/3] dm crypt: Fix zoned block device support
Date: Sat, 17 Apr 2021 11:33:22 +0900
Message-Id: <20210417023323.852530-3-damien.lemoal@wdc.com>
In-Reply-To: <20210417023323.852530-1-damien.lemoal@wdc.com>
References: <20210417023323.852530-1-damien.lemoal@wdc.com>

Zone append BIOs (REQ_OP_ZONE_APPEND) always specify the start sector of
the zone to be written instead of the actual sector location to write.
The write location is determined by the device and returned to the host
upon completion of the operation. This interface, while simple and
efficient for writing into sequential zones of a zoned block device, is
incompatible with the use of sector values to calculate a cipher block
IV: all data written in a zone ends up being encrypted using IV values
corresponding to the first sectors of the zone, but read operations
specify the actual sector of the data within the zone, resulting in an
IV mismatch between encryption and decryption.

Using a single sector value (e.g. the zone start sector) for all reads
and writes into a zone would solve this problem, but at the cost of
weakening the cipher chosen by the user. Instead, explicitly disable
support for zone append operations using the zone_append_not_supported
field of struct dm_target if the IV mode used is sector-based, that is,
for all IV modes except null and random. The cipher flag
CRYPT_IV_ZONE_APPEND is introduced to indicate that the IV mode does not
use sector values and is therefore compatible with zone append
operations.
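The IV mismatch can be illustrated with a deliberately toy XOR "cipher" whose keystream depends only on the IV sector (this is a sketch, not dm-crypt's real algorithms): a zone append write derives its IV from the zone start sector, but a later read of the same data derives it from the actual sector, so everything not located exactly at the zone start decrypts to garbage.

```c
#include <stdint.h>

/*
 * Toy model, not dm-crypt: "encrypt" a byte by XOR-ing it with a keystream
 * derived from the IV sector. Real ciphers differ, but share the property
 * that decrypting with the wrong IV yields garbage.
 */
static uint8_t toy_crypt(uint64_t iv_sector, uint8_t byte)
{
	return byte ^ (uint8_t)(iv_sector * 0x9eu);
}

/*
 * Zone append: only the zone start sector is known when the BIO is
 * issued, so the write IV is derived from it.
 */
static uint8_t zone_append_write(uint64_t zone_start, uint8_t byte)
{
	return toy_crypt(zone_start, byte);
}

/* A read targets the actual data location, so its IV uses that sector. */
static uint8_t read_back(uint64_t actual_sector, uint8_t on_disk)
{
	return toy_crypt(actual_sector, on_disk);
}
```

Only IV modes that ignore the sector entirely (null, random) are immune to this asymmetry, which is exactly why they are the only modes allowed to keep zone append enabled.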
This flag is set in crypt_ctr_ivmode() for the null and random IV modes
and checked in crypt_ctr() to set zone_append_not_supported to true when
CRYPT_IV_ZONE_APPEND is not set for the chosen cipher.

Reported-by: Shin'ichiro Kawasaki
Fixes: 8e225f04d2dd ("dm crypt: Enable zoned block device support")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm-crypt.c | 49 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b0ab080f2567..6ef35bb29ce5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -137,6 +137,7 @@ enum cipher_flags {
 	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
 	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
 	CRYPT_ENCRYPT_PREPROCESS,	/* Must preprocess data for encryption (elephant) */
+	CRYPT_IV_ZONE_APPEND,		/* IV mode supports zone append operations */
 };
 
 /*
@@ -2750,9 +2751,10 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 	}
 
 	/* Choose ivmode, see comments at iv code. */
-	if (ivmode == NULL)
+	if (ivmode == NULL) {
 		cc->iv_gen_ops = NULL;
-	else if (strcmp(ivmode, "plain") == 0)
+		set_bit(CRYPT_IV_ZONE_APPEND, &cc->cipher_flags);
+	} else if (strcmp(ivmode, "plain") == 0)
 		cc->iv_gen_ops = &crypt_iv_plain_ops;
 	else if (strcmp(ivmode, "plain64") == 0)
 		cc->iv_gen_ops = &crypt_iv_plain64_ops;
@@ -2762,9 +2764,10 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 		cc->iv_gen_ops = &crypt_iv_essiv_ops;
 	else if (strcmp(ivmode, "benbi") == 0)
 		cc->iv_gen_ops = &crypt_iv_benbi_ops;
-	else if (strcmp(ivmode, "null") == 0)
+	else if (strcmp(ivmode, "null") == 0) {
 		cc->iv_gen_ops = &crypt_iv_null_ops;
-	else if (strcmp(ivmode, "eboiv") == 0)
+		set_bit(CRYPT_IV_ZONE_APPEND, &cc->cipher_flags);
+	} else if (strcmp(ivmode, "eboiv") == 0)
 		cc->iv_gen_ops = &crypt_iv_eboiv_ops;
 	else if (strcmp(ivmode, "elephant") == 0) {
 		cc->iv_gen_ops = &crypt_iv_elephant_ops;
@@ -2791,6 +2794,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 		cc->key_extra_size = cc->iv_size + TCW_WHITENING_SIZE;
 	} else if (strcmp(ivmode, "random") == 0) {
 		cc->iv_gen_ops = &crypt_iv_random_ops;
+		set_bit(CRYPT_IV_ZONE_APPEND, &cc->cipher_flags);
 		/* Need storage space in integrity fields. */
 		cc->integrity_iv_size = cc->iv_size;
 	} else {
@@ -3281,14 +3285,32 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	}
 	cc->start = tmpll;
 
-	/*
-	 * For zoned block devices, we need to preserve the issuer write
-	 * ordering. To do so, disable write workqueues and force inline
-	 * encryption completion.
-	 */
 	if (bdev_is_zoned(cc->dev->bdev)) {
+		/*
+		 * For zoned block devices, we need to preserve the issuer write
+		 * ordering. To do so, disable write workqueues and force inline
+		 * encryption completion.
+		 */
 		set_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags);
 		set_bit(DM_CRYPT_WRITE_INLINE, &cc->flags);
+
+		/*
+		 * All zone append writes to a zone of a zoned block device will
+		 * have the same BIO sector (the start of the zone). When the
+		 * cipher IV mode uses sector values, all data targeting a
+		 * zone will be encrypted using the first sector numbers of the
+		 * zone. This will not result in write errors but will
+		 * cause most reads to fail as reads will use the sector values
+		 * for the actual data locations, resulting in IV mismatch.
+		 * To avoid this problem, allow zone append operations only when
+		 * the selected IV mode indicated that zone append operations
+		 * are supported, that is, IV modes that do not use sector
+		 * values (null and random IVs).
+		 */
+		if (!test_bit(CRYPT_IV_ZONE_APPEND, &cc->cipher_flags)) {
+			DMWARN("Zone append is not supported with the selected IV mode");
+			ti->zone_append_not_supported = true;
+		}
 	}
 
 	if (crypt_integrity_aead(cc) || cc->integrity_iv_size) {
@@ -3356,6 +3378,15 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 	struct dm_crypt_io *io;
 	struct crypt_config *cc = ti->private;
 
+	/*
+	 * For zoned targets, we should not see any zone append operation if
+	 * the cipher IV mode selected does not support them. In the unlikely
+	 * case we do see one such operation, warn and fail the request.
+	 */
+	if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND &&
+			 !test_bit(CRYPT_IV_ZONE_APPEND, &cc->cipher_flags)))
+		return DM_MAPIO_KILL;
+
 	/*
 	 * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
 	 * - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight

From patchwork Sat Apr 17 02:33:23 2021
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
	Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
	linux-scsi@vger.kernel.org, "Martin K. Petersen",
	linux-fsdevel@vger.kernel.org
Cc: Johannes Thumshirn, Shinichiro Kawasaki
Subject: [PATCH v2 3/3] zonefs: fix synchronous write to sequential zone files
Date: Sat, 17 Apr 2021 11:33:23 +0900
Message-Id: <20210417023323.852530-4-damien.lemoal@wdc.com>
In-Reply-To: <20210417023323.852530-1-damien.lemoal@wdc.com>
References: <20210417023323.852530-1-damien.lemoal@wdc.com>

Synchronous writes to sequential zone files cannot use zone append
operations if the underlying zoned device queue limit
max_zone_append_sectors is 0, indicating that the device does not support
this operation. In this case, fall back to using regular write
operations.

Fixes: 02ef12a663c7 ("zonefs: use REQ_OP_ZONE_APPEND for sync DIO")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 fs/zonefs/super.c  | 16 ++++++++++++----
 fs/zonefs/zonefs.h |  2 ++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 049e36c69ed7..b97566b9dff7 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -689,14 +689,15 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct inode *inode = file_inode(iocb->ki_filp);
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
-	struct block_device *bdev = inode->i_sb->s_bdev;
-	unsigned int max;
+	struct super_block *sb = inode->i_sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	struct block_device *bdev = sb->s_bdev;
+	sector_t max = sbi->s_max_zone_append_sectors;
 	struct bio *bio;
 	ssize_t size;
 	int nr_pages;
 	ssize_t ret;
 
-	max = queue_max_zone_append_sectors(bdev_get_queue(bdev));
 	max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);
 	iov_iter_truncate(from, max);
 
@@ -853,6 +854,8 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
 
 	/* Enforce sequential writes (append only) in sequential zones */
 	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ) {
+		struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+
 		mutex_lock(&zi->i_truncate_mutex);
 		if (iocb->ki_pos != zi->i_wpoffset) {
 			mutex_unlock(&zi->i_truncate_mutex);
@@ -860,7 +863,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
 			goto inode_unlock;
 		}
 		mutex_unlock(&zi->i_truncate_mutex);
-		append = sync;
+		append = sync && sbi->s_max_zone_append_sectors;
 	}
 
 	if (append)
@@ -1683,6 +1686,11 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
 		sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
 	}
 
+	sbi->s_max_zone_append_sectors =
+		queue_max_zone_append_sectors(bdev_get_queue(sb->s_bdev));
+	if (!sbi->s_max_zone_append_sectors)
+		zonefs_info(sb, "Zone append is not supported: falling back to using regular writes\n");
+
 	ret = zonefs_read_super(sb);
 	if (ret)
 		return ret;
diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
index 51141907097c..2b8c3b1a32ea 100644
--- a/fs/zonefs/zonefs.h
+++ b/fs/zonefs/zonefs.h
@@ -185,6 +185,8 @@ struct zonefs_sb_info {
 
 	unsigned int s_max_open_zones;
 	atomic_t s_open_zones;
+
+	sector_t s_max_zone_append_sectors;
 };
 
 static inline struct zonefs_sb_info *ZONEFS_SB(struct super_block *sb)
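The fallback in this patch boils down to two small pieces of arithmetic, sketched here outside the kernel (assuming 512-byte sectors, i.e. SECTOR_SHIFT == 9; the function names are illustrative): zone append is used only for synchronous writes when the cached limit is non-zero, and the append size is the limit converted to bytes and rounded down to the filesystem block size.

```c
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SHIFT 9 /* 512-byte sectors */

/* Mirror of the "append = sync && sbi->s_max_zone_append_sectors" test. */
static bool use_zone_append(bool sync, uint64_t max_zone_append_sectors)
{
	return sync && max_zone_append_sectors != 0;
}

/*
 * Maximum bytes per zone append BIO: the device limit in sectors,
 * converted to bytes and aligned down to the filesystem block size,
 * like the ALIGN_DOWN(max << SECTOR_SHIFT, blocksize) in the patch.
 */
static uint64_t max_append_bytes(uint64_t max_sectors, uint64_t blocksize)
{
	uint64_t bytes = max_sectors << SECTOR_SHIFT;

	return bytes - (bytes % blocksize);
}
```

For example, a device limit of 127 sectors (65024 bytes) on a 4096-byte block size filesystem rounds down to 61440 bytes per append BIO; a limit of 0 simply disables the zone append path.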