From patchwork Fri Apr 16 03:05:25 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12206583
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
    Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
    linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-fsdevel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, David Sterba, Josef Bacik
Cc: Johannes Thumshirn, Shinichiro Kawasaki, Naohiro Aota
Subject: [PATCH 1/4] dm: Introduce zone append support control
Date: Fri, 16 Apr 2021 12:05:25 +0900
Message-Id: <20210416030528.757513-2-damien.lemoal@wdc.com>
In-Reply-To: <20210416030528.757513-1-damien.lemoal@wdc.com>
References: <20210416030528.757513-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Add the boolean field zone_append_not_supported to struct dm_target to
allow a target implementing a zoned block device to explicitly opt out
of zone append (REQ_OP_ZONE_APPEND) support. When the target constructor
sets this field to true, dm_table_set_restrictions() sets the target
device queue limit max_zone_append_sectors to 0, so that users of the
target (e.g. file systems) can detect that the device cannot process
zone append operations.

Zone append support is detected in the same way as other device features
such as secure erase, using an iteration helper: the function
dm_table_supports_zone_append() is defined when CONFIG_BLK_DEV_ZONED is
enabled.

Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/md/dm-table.c         | 41 +++++++++++++++++++++++++++++++++++
 include/linux/device-mapper.h |  6 +++++
 2 files changed, 47 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index e5f0f1703c5d..9efd7a0ee27e 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1999,6 +1999,37 @@ static int device_requires_stable_pages(struct dm_target *ti,
 	return blk_queue_stable_writes(q);
 }
 
+#ifdef CONFIG_BLK_DEV_ZONED
+static int device_not_zone_append_capable(struct dm_target *ti,
+					  struct dm_dev *dev, sector_t start,
+					  sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return !blk_queue_is_zoned(q) ||
+	       !q->limits.max_zone_append_sectors;
+}
+
+static bool dm_table_supports_zone_append(struct dm_table *t)
+{
+	struct dm_target *ti;
+	unsigned int i;
+
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (ti->zone_append_not_supported)
+			return false;
+
+		if (!ti->type->iterate_devices ||
+		    ti->type->iterate_devices(ti, device_not_zone_append_capable, NULL))
+			return false;
+	}
+
+	return true;
+}
+#endif
+
 void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			       struct queue_limits *limits)
 {
@@ -2091,6 +2122,16 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (blk_queue_is_zoned(q)) {
 		WARN_ON_ONCE(queue_is_mq(q));
 		q->nr_zones = blkdev_nr_zones(t->md->disk);
+
+		/*
+		 * All zoned devices support zone append by default. However,
+		 * some zoned targets (e.g. dm-crypt) cannot support this
+		 * operation. Check here if the target indicated the lack of
+		 * support for zone append and set max_zone_append_sectors to 0
+		 * in that case so that users (e.g. an FS) can detect this fact.
+		 */
+		if (!dm_table_supports_zone_append(t))
+			q->limits.max_zone_append_sectors = 0;
 	}
 #endif
 
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 5c641f930caf..4da699add262 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -361,6 +361,12 @@ struct dm_target {
 	 * Set if we need to limit the number of in-flight bios when swapping.
 	 */
 	bool limit_swap_bios:1;
+
+	/*
+	 * Set if this target is a zoned device that cannot accept
+	 * zone append operations.
+	 */
+	bool zone_append_not_supported:1;
 };
 
 void *dm_per_bio_data(struct bio *bio, size_t data_size);
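With this field in place, a zoned target that cannot handle zone append
simply sets it from its constructor. A minimal sketch of such a
constructor follows; the target and function names are purely
illustrative and not part of this series, only the flag assignment comes
from this patch.

/* Hypothetical constructor of a zoned dm target whose remapping is
 * incompatible with REQ_OP_ZONE_APPEND. Illustrative only. */
static int example_zoned_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	/* ... parse arguments and set up ti->private here ... */

	/*
	 * Opt out of zone append: dm_table_set_restrictions() will then
	 * advertise max_zone_append_sectors = 0 for the mapped device.
	 */
	ti->zone_append_not_supported = true;

	return 0;
}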
From patchwork Fri Apr 16 03:05:26 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12206585
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
    Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
    linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-fsdevel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, David Sterba, Josef Bacik
Cc: Johannes Thumshirn, Shinichiro Kawasaki, Naohiro Aota
Subject: [PATCH 2/4] dm crypt: Fix zoned block device support
Date: Fri, 16 Apr 2021 12:05:26 +0900
Message-Id: <20210416030528.757513-3-damien.lemoal@wdc.com>
In-Reply-To: <20210416030528.757513-1-damien.lemoal@wdc.com>
References: <20210416030528.757513-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Zone append BIOs (REQ_OP_ZONE_APPEND) always specify the start sector of
the zone to be written instead of the actual sector location to write.
The write location is determined by the device and returned to the host
upon completion of the operation. This interface, while simple and
efficient for writing into the sequential zones of a zoned block device,
is incompatible with the use of sector values to calculate a cipher block
IV: all data written in a zone ends up being encrypted with IV values
corresponding to the first sectors of the zone, while a read may specify
any sector within the zone, resulting in an IV mismatch between
encryption and decryption.

Using a single sector value (e.g. the zone start sector) for all reads
and writes within a zone could solve this problem, but at the cost of
weakening the cipher chosen by the user. Instead, explicitly disable
support for zone append operations using the zone_append_not_supported
field of struct dm_target whenever the IV mode used is sector-based, that
is, for all IV modes except null and random. The cipher flag
CRYPT_IV_NO_SECTORS is introduced to indicate that the cipher does not
use sector values. This flag is set in crypt_ctr_ivmode() for the null
and random IV modes and checked in crypt_ctr() to set
zone_append_not_supported to true if CRYPT_IV_NO_SECTORS is not set for
the chosen cipher.

Reported-by: Shin'ichiro Kawasaki
Signed-off-by: Damien Le Moal
---
 drivers/md/dm-crypt.c | 48 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b0ab080f2567..0a44bc0ff960 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -137,6 +137,7 @@ enum cipher_flags {
 	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
 	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
 	CRYPT_ENCRYPT_PREPROCESS,	/* Must preprocess data for encryption (elephant) */
+	CRYPT_IV_NO_SECTORS,		/* IV calculation does not use sectors */
 };
 
 /*
@@ -2750,9 +2751,10 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 	}
 
 	/* Choose ivmode, see comments at iv code. */
-	if (ivmode == NULL)
+	if (ivmode == NULL) {
 		cc->iv_gen_ops = NULL;
-	else if (strcmp(ivmode, "plain") == 0)
+		set_bit(CRYPT_IV_NO_SECTORS, &cc->cipher_flags);
+	} else if (strcmp(ivmode, "plain") == 0)
 		cc->iv_gen_ops = &crypt_iv_plain_ops;
 	else if (strcmp(ivmode, "plain64") == 0)
 		cc->iv_gen_ops = &crypt_iv_plain64_ops;
@@ -2762,9 +2764,10 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 		cc->iv_gen_ops = &crypt_iv_essiv_ops;
 	else if (strcmp(ivmode, "benbi") == 0)
 		cc->iv_gen_ops = &crypt_iv_benbi_ops;
-	else if (strcmp(ivmode, "null") == 0)
+	else if (strcmp(ivmode, "null") == 0) {
 		cc->iv_gen_ops = &crypt_iv_null_ops;
-	else if (strcmp(ivmode, "eboiv") == 0)
+		set_bit(CRYPT_IV_NO_SECTORS, &cc->cipher_flags);
+	} else if (strcmp(ivmode, "eboiv") == 0)
 		cc->iv_gen_ops = &crypt_iv_eboiv_ops;
 	else if (strcmp(ivmode, "elephant") == 0) {
 		cc->iv_gen_ops = &crypt_iv_elephant_ops;
@@ -2791,6 +2794,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 		cc->key_extra_size = cc->iv_size + TCW_WHITENING_SIZE;
 	} else if (strcmp(ivmode, "random") == 0) {
 		cc->iv_gen_ops = &crypt_iv_random_ops;
+		set_bit(CRYPT_IV_NO_SECTORS, &cc->cipher_flags);
 		/* Need storage space in integrity fields. */
 		cc->integrity_iv_size = cc->iv_size;
 	} else {
@@ -3281,14 +3285,31 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	}
 	cc->start = tmpll;
 
-	/*
-	 * For zoned block devices, we need to preserve the issuer write
-	 * ordering. To do so, disable write workqueues and force inline
-	 * encryption completion.
-	 */
 	if (bdev_is_zoned(cc->dev->bdev)) {
+		/*
+		 * For zoned block devices, we need to preserve the issuer write
+		 * ordering. To do so, disable write workqueues and force inline
+		 * encryption completion.
+		 */
 		set_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags);
 		set_bit(DM_CRYPT_WRITE_INLINE, &cc->flags);
+
+		/*
+		 * All zone append writes to a zone of a zoned block device will
+		 * have the same BIO sector (the start of the zone). When the
+		 * cypher IV mode uses sector values, all data targeting a
+		 * zone will be encrypted using the first sector numbers of the
+		 * zone. This will not result in write errors but will
+		 * cause most reads to fail as reads will use the sector values
+		 * for the actual data location, resulting in IV mismatch.
+		 * To avoid this problem, allow zone append operations only for
+		 * cyphers with an IV mode not using sector values (null and
+		 * random IVs).
+		 */
+		if (!test_bit(CRYPT_IV_NO_SECTORS, &cc->cipher_flags)) {
+			DMWARN("Zone append is not supported with sector-based IV cyphers");
+			ti->zone_append_not_supported = true;
+		}
 	}
 
 	if (crypt_integrity_aead(cc) || cc->integrity_iv_size) {
@@ -3356,6 +3377,15 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 	struct dm_crypt_io *io;
 	struct crypt_config *cc = ti->private;
 
+	/*
+	 * For zoned targets using a sector based IV, zone append is not
+	 * supported. We should not see any such operation in that case.
+	 * In the unlikely case we do, warn and fail the request.
+	 */
+	if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND &&
+			 !test_bit(CRYPT_IV_NO_SECTORS, &cc->cipher_flags)))
+		return DM_MAPIO_KILL;
+
 	/*
 	 * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
 	 * - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
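The IV mismatch described above can be illustrated with a small
standalone C program (userspace, not kernel code; the sector numbers and
the plain64-style IV are simplified assumptions): the IV used at write
time is derived from the submitted BIO sector, which for zone append is
always the zone start, while a later read derives its IV from the actual
data location returned by the device.

#include <stdint.h>
#include <stdio.h>

/* plain64-like IV: simply the 512B sector number. */
static uint64_t plain64_iv(uint64_t sector)
{
	return sector;
}

int main(void)
{
	uint64_t zone_start = 524288;	/* sector submitted with REQ_OP_ZONE_APPEND */
	uint64_t written_at = 524296;	/* actual write location chosen by the device */

	uint64_t iv_write = plain64_iv(zone_start);	/* IV used to encrypt */
	uint64_t iv_read  = plain64_iv(written_at);	/* IV used to decrypt */

	printf("write IV %llu, read IV %llu: %s\n",
	       (unsigned long long)iv_write, (unsigned long long)iv_read,
	       iv_write == iv_read ? "match" : "mismatch, decryption fails");
	return 0;
}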
From patchwork Fri Apr 16 03:05:27 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12206587
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
    Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
    linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-fsdevel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, David Sterba, Josef Bacik
Cc: Johannes Thumshirn, Shinichiro Kawasaki, Naohiro Aota
Subject: [PATCH 3/4] btrfs: zoned: fail mount if the device does not support zone append
Date: Fri, 16 Apr 2021 12:05:27 +0900
Message-Id: <20210416030528.757513-4-damien.lemoal@wdc.com>
In-Reply-To: <20210416030528.757513-1-damien.lemoal@wdc.com>
References: <20210416030528.757513-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Johannes Thumshirn

For zoned btrfs, zone append is mandatory for writing to a sequential
write required zone; otherwise, parallel writes to the same zone could
result in unaligned write errors. If a zoned block device does not
support zone append (e.g. a dm-crypt zoned device using a sector-based
IV cipher), fail the mount.

Signed-off-by: Johannes Thumshirn
Signed-off-by: Damien Le Moal
---
 fs/btrfs/zoned.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index eeb3ebe11d7a..70b23a0d03b1 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -342,6 +342,13 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device)
 	if (!IS_ALIGNED(nr_sectors, zone_sectors))
 		zone_info->nr_zones++;
 
+	if (bdev_is_zoned(bdev) && zone_info->max_zone_append_size == 0) {
+		btrfs_err(fs_info, "zoned: device %pg does not support zone append",
+			  bdev);
+		ret = -EINVAL;
+		goto out;
+	}
+
 	zone_info->seq_zones = bitmap_zalloc(zone_info->nr_zones, GFP_KERNEL);
 	if (!zone_info->seq_zones) {
 		ret = -ENOMEM;
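Whether a given device will trip this check can be predicted from user
space, since the block layer exposes the same limit through sysfs (the
queue attribute zone_append_max_bytes; a value of 0 means zone append is
unsupported). A small sketch, assuming "sda" as an example device name:

#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sda/queue/zone_append_max_bytes";
	unsigned long long max_bytes = 0;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu", &max_bytes) != 1)
		max_bytes = 0;
	fclose(f);

	/* 0 -> a zoned btrfs mount on this device would now fail. */
	printf("%s = %llu (%s)\n", path, max_bytes,
	       max_bytes ? "zone append supported" : "zone append not supported");
	return 0;
}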
From patchwork Fri Apr 16 03:05:28 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12206589
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org,
    Jens Axboe, linux-nvme@lists.infradead.org, Christoph Hellwig,
    linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-fsdevel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, David Sterba, Josef Bacik
Cc: Johannes Thumshirn, Shinichiro Kawasaki, Naohiro Aota
Subject: [PATCH 4/4] zonefs: fix synchronous write to sequential zone files
Date: Fri, 16 Apr 2021 12:05:28 +0900
Message-Id: <20210416030528.757513-5-damien.lemoal@wdc.com>
In-Reply-To: <20210416030528.757513-1-damien.lemoal@wdc.com>
References: <20210416030528.757513-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Synchronous writes to sequential zone files cannot use zone append
operations if the underlying zoned device queue limit
max_zone_append_sectors is 0, indicating that the device does not
support this operation. In this case, fall back to using regular write
operations.
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 fs/zonefs/super.c  | 16 ++++++++++++----
 fs/zonefs/zonefs.h |  2 ++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 049e36c69ed7..b97566b9dff7 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -689,14 +689,15 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb,
 					      struct iov_iter *from)
 {
 	struct inode *inode = file_inode(iocb->ki_filp);
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
-	struct block_device *bdev = inode->i_sb->s_bdev;
-	unsigned int max;
+	struct super_block *sb = inode->i_sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	struct block_device *bdev = sb->s_bdev;
+	sector_t max = sbi->s_max_zone_append_sectors;
 	struct bio *bio;
 	ssize_t size;
 	int nr_pages;
 	ssize_t ret;
 
-	max = queue_max_zone_append_sectors(bdev_get_queue(bdev));
 	max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);
 	iov_iter_truncate(from, max);
 
@@ -853,6 +854,8 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
 
 	/* Enforce sequential writes (append only) in sequential zones */
 	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ) {
+		struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+
 		mutex_lock(&zi->i_truncate_mutex);
 		if (iocb->ki_pos != zi->i_wpoffset) {
 			mutex_unlock(&zi->i_truncate_mutex);
@@ -860,7 +863,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
 			goto inode_unlock;
 		}
 		mutex_unlock(&zi->i_truncate_mutex);
-		append = sync;
+		append = sync && sbi->s_max_zone_append_sectors;
 	}
 
 	if (append)
@@ -1683,6 +1686,11 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
 		sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
 	}
 
+	sbi->s_max_zone_append_sectors =
+		queue_max_zone_append_sectors(bdev_get_queue(sb->s_bdev));
+	if (!sbi->s_max_zone_append_sectors)
+		zonefs_info(sb, "Zone append is not supported: falling back to using regular writes\n");
+
 	ret = zonefs_read_super(sb);
 	if (ret)
 		return ret;
diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
index 51141907097c..2b8c3b1a32ea 100644
--- a/fs/zonefs/zonefs.h
+++ b/fs/zonefs/zonefs.h
@@ -185,6 +185,8 @@ struct zonefs_sb_info {
 
 	unsigned int s_max_open_zones;
 	atomic_t s_open_zones;
+
+	sector_t s_max_zone_append_sectors;
 };
 
 static inline struct zonefs_sb_info *ZONEFS_SB(struct super_block *sb)
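Taken together, the two changes read the queue limit once at mount time
and then gate the per-write choice between zone append and regular
writes. A condensed sketch of that flow, with simplified types and names
that are not the actual zonefs functions:

#include <stdbool.h>
#include <stdint.h>

struct example_sb_info {
	uint64_t s_max_zone_append_sectors;	/* cached at mount time */
};

/* Mount time: cache queue_max_zone_append_sectors(); 0 means the device
 * cannot do zone append. */
static void example_cache_limit(struct example_sb_info *sbi, uint64_t queue_limit)
{
	sbi->s_max_zone_append_sectors = queue_limit;
}

/* Write time: use zone append only for synchronous writes to sequential
 * zone files and only if the device supports it; otherwise fall back to
 * a regular write at the current write pointer. */
static bool example_use_zone_append(const struct example_sb_info *sbi, bool sync)
{
	return sync && sbi->s_max_zone_append_sectors != 0;
}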