From patchwork Tue Jun 27 18:36:15 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295394
From: Nitesh Shetty
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
	dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	willy@infradead.org, hare@suse.de, djwong@kernel.org,
	bvanassche@acm.org, ming.lei@redhat.com, dlemoal@kernel.org,
	nitheshshetty@gmail.com, gost.dev@samsung.com, Nitesh Shetty,
	Kanchan Joshi, Anuj Gupta, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 1/9] block: Introduce queue limits for copy-offload support
Date: Wed, 28 Jun 2023 00:06:15 +0530
Message-Id: <20230627183629.26571-2-nj.shetty@samsung.com>
X-Mailer: git-send-email 2.35.1.500.gb896f729e2
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>
References: <20230627183629.26571-1-nj.shetty@samsung.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Add device limits as sysfs entries:

 - copy_offload (RW)
 - copy_max_bytes (RW)
 - copy_max_bytes_hw (RO)

The above limits help to split the copy payload in the block layer.

copy_offload: used to set copy offload (1) or emulation (0).
copy_max_bytes: maximum total length of a copy in a single payload.
copy_max_bytes_hw: reflects the maximum copy length supported by the device.
Reviewed-by: Hannes Reinecke
Signed-off-by: Nitesh Shetty
Signed-off-by: Kanchan Joshi
Signed-off-by: Anuj Gupta
---
 Documentation/ABI/stable/sysfs-block | 33 +++++++++++++++
 block/blk-settings.c                 | 24 +++++++++++
 block/blk-sysfs.c                    | 63 ++++++++++++++++++++++++++++
 include/linux/blkdev.h               | 12 ++++++
 include/uapi/linux/fs.h              |  3 ++
 5 files changed, 135 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index c57e5b7cb532..3c97303f658b 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -155,6 +155,39 @@ Description:
 		last zone of the device which may be smaller.
 
+What:		/sys/block/<disk>/queue/copy_offload
+Date:		June 2023
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] When read, this file shows whether offloading copy to a
+		device is enabled (1) or disabled (0). Writing '0' to this
+		file disables offloading copies for this device. Writing '1'
+		enables the feature. If the device does not support
+		offloading, writing '1' results in an error.
+
+What:		/sys/block/<disk>/queue/copy_max_bytes
+Date:		June 2023
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RW] This is the maximum number of bytes that the block layer
+		will allow for a copy request. It is always smaller than or
+		equal to the maximum size allowed by the hardware, indicated
+		by 'copy_max_bytes_hw'. An attempt to set a value higher than
+		'copy_max_bytes_hw' is truncated to 'copy_max_bytes_hw'.
+
+What:		/sys/block/<disk>/queue/copy_max_bytes_hw
+Date:		June 2023
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] This is the maximum number of bytes that the hardware
+		will allow for a single data copy request.
+		A value of 0 means that the device does not support
+		copy offload.
+
+
 What:		/sys/block/<disk>/queue/crypto/
 Date:		February 2022
 Contact:	linux-block@vger.kernel.org

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 4dd59059b788..738cd3f21259 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -59,6 +59,8 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->max_copy_sectors_hw = 0;
+	lim->max_copy_sectors = 0;
 }
 
 /**
@@ -82,6 +84,8 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
+	lim->max_copy_sectors_hw = UINT_MAX;
+	lim->max_copy_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
@@ -183,6 +187,22 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);
 
+/**
+ * blk_queue_max_copy_sectors_hw - set max sectors for a single copy payload
+ * @q: the request queue for the device
+ * @max_copy_sectors: maximum number of sectors to copy
+ **/
+void blk_queue_max_copy_sectors_hw(struct request_queue *q,
+		unsigned int max_copy_sectors)
+{
+	if (max_copy_sectors > (COPY_MAX_BYTES >> SECTOR_SHIFT))
+		max_copy_sectors = COPY_MAX_BYTES >> SECTOR_SHIFT;
+
+	q->limits.max_copy_sectors_hw = max_copy_sectors;
+	q->limits.max_copy_sectors = max_copy_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_sectors_hw);
+
 /**
  * blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
  * @q: the request queue for the device
@@ -578,6 +598,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_segment_size = min_not_zero(t->max_segment_size,
 					   b->max_segment_size);
 
+	t->max_copy_sectors = min(t->max_copy_sectors, b->max_copy_sectors);
+	t->max_copy_sectors_hw = min(t->max_copy_sectors_hw,
+				     b->max_copy_sectors_hw);
+
 	t->misaligned |= b->misaligned;
 
 	alignment = queue_limit_alignment_offset(b, start);

diff --git
a/block/blk-sysfs.c b/block/blk-sysfs.c
index afc797fb0dfc..43551778d035 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -199,6 +199,62 @@ static ssize_t queue_discard_zeroes_data_show(struct request_queue *q, char *page)
 	return queue_var_show(0, page);
 }
 
+static ssize_t queue_copy_offload_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(blk_queue_copy(q), page);
+}
+
+static ssize_t queue_copy_offload_store(struct request_queue *q,
+					const char *page, size_t count)
+{
+	unsigned long copy_offload;
+	ssize_t ret = queue_var_store(&copy_offload, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (copy_offload && !q->limits.max_copy_sectors_hw)
+		return -EINVAL;
+
+	if (copy_offload)
+		blk_queue_flag_set(QUEUE_FLAG_COPY, q);
+	else
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+
+	return count;
+}
+
+static ssize_t queue_copy_max_hw_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n", (unsigned long long)
+		       q->limits.max_copy_sectors_hw << SECTOR_SHIFT);
+}
+
+static ssize_t queue_copy_max_show(struct request_queue *q, char *page)
+{
+	return sprintf(page, "%llu\n", (unsigned long long)
+		       q->limits.max_copy_sectors << SECTOR_SHIFT);
+}
+
+static ssize_t queue_copy_max_store(struct request_queue *q,
+				    const char *page, size_t count)
+{
+	unsigned long max_copy;
+	ssize_t ret = queue_var_store(&max_copy, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (max_copy & (queue_logical_block_size(q) - 1))
+		return -EINVAL;
+
+	max_copy >>= SECTOR_SHIFT;
+	q->limits.max_copy_sectors = min_t(unsigned int, max_copy,
+					   q->limits.max_copy_sectors_hw);
+
+	return count;
+}
+
 static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(0, page);
@@ -522,6 +578,10 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
 QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
 QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
 
+QUEUE_RW_ENTRY(queue_copy_offload, "copy_offload");
+QUEUE_RO_ENTRY(queue_copy_max_hw, "copy_max_bytes_hw");
+QUEUE_RW_ENTRY(queue_copy_max, "copy_max_bytes");
+
 QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
 QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
 QUEUE_RW_ENTRY(queue_poll, "io_poll");
@@ -638,6 +698,9 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_copy_offload_entry.attr,
+	&queue_copy_max_hw_entry.attr,
+	&queue_copy_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed44a997f629..6098665953e6 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -309,6 +309,9 @@ struct queue_limits {
 	unsigned int		discard_alignment;
 	unsigned int		zone_write_granularity;
 
+	unsigned int		max_copy_sectors_hw;
+	unsigned int		max_copy_sectors;
+
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
 	unsigned short		max_discard_segments;
@@ -554,6 +557,7 @@ struct request_queue {
 #define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
 #define QUEUE_FLAG_SQ_SCHED	30	/* single queue style io dispatch */
 #define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
+#define QUEUE_FLAG_COPY		32	/* enable/disable device copy offload */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
 				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
@@ -574,6 +578,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 	test_bit(QUEUE_FLAG_STABLE_WRITES, &(q)->queue_flags)
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
+#define blk_queue_copy(q)	test_bit(QUEUE_FLAG_COPY, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX,
&(q)->queue_flags)
@@ -891,6 +896,8 @@ extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
 extern void blk_queue_max_segments(struct request_queue *, unsigned short);
 extern void blk_queue_max_discard_segments(struct request_queue *, unsigned short);
+extern void blk_queue_max_copy_sectors_hw(struct request_queue *q,
+		unsigned int max_copy_sectors);
 void blk_queue_max_secure_erase_sectors(struct request_queue *q,
 		unsigned int max_sectors);
 extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
@@ -1210,6 +1217,11 @@ static inline unsigned int bdev_discard_granularity(struct block_device *bdev)
 	return bdev_get_queue(bdev)->limits.discard_granularity;
 }
 
+static inline unsigned int bdev_max_copy_sectors(struct block_device *bdev)
+{
+	return bdev_get_queue(bdev)->limits.max_copy_sectors;
+}
+
 static inline unsigned int
 bdev_max_secure_erase_sectors(struct block_device *bdev)
 {

diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index b7b56871029c..dff56813f95a 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -64,6 +64,9 @@ struct fstrim_range {
 	__u64 minlen;
 };
 
+/* maximum copy offload length, this is set to 128MB based on current testing */
+#define COPY_MAX_BYTES		(1 << 27)
+
 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */
 #define FILE_DEDUPE_RANGE_SAME		0
 #define FILE_DEDUPE_RANGE_DIFFERS	1

From patchwork Tue Jun 27 18:36:16 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295393
From: Nitesh Shetty
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
	dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	willy@infradead.org, hare@suse.de, djwong@kernel.org,
	bvanassche@acm.org, ming.lei@redhat.com, dlemoal@kernel.org,
	nitheshshetty@gmail.com, gost.dev@samsung.com, Nitesh Shetty,
	Anuj Gupta, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 2/9] block: Add copy offload support infrastructure
Date: Wed, 28 Jun 2023 00:06:16 +0530
Message-Id: <20230627183629.26571-3-nj.shetty@samsung.com>
X-Mailer: git-send-email 2.35.1.500.gb896f729e2
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>
References: <20230627183629.26571-1-nj.shetty@samsung.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Introduce blkdev_copy_offload, which takes arguments similar to
copy_file_range and performs a copy offload between two bdevs.
Introduce the REQ_OP_COPY_DST and REQ_OP_COPY_SRC operations.

Issue REQ_OP_COPY_DST with the destination information while holding a
plug. This bio flows down to the request layer and waits for a source
bio to get merged. Then issue REQ_OP_COPY_SRC with the source
information; this bio reaches the request layer and merges with the
pending destination request. If, for any reason, a request reaches the
driver with only one of the source/destination halves, the copy offload
is failed. Larger copies are split based on the max_copy_sectors limit.
Suggested-by: Christoph Hellwig
Signed-off-by: Anuj Gupta
Signed-off-by: Nitesh Shetty
---
 block/blk-core.c          |   5 ++
 block/blk-lib.c           | 177 ++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         |  21 +++++
 block/blk.h               |   9 ++
 block/elevator.h          |   1 +
 include/linux/bio.h       |   4 +-
 include/linux/blk_types.h |  21 +++++
 include/linux/blkdev.h    |   4 +
 8 files changed, 241 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 99d8b9812b18..e6714391c93f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -796,6 +796,11 @@ void submit_bio_noacct(struct bio *bio)
 		if (!q->limits.max_write_zeroes_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_COPY_SRC:
+	case REQ_OP_COPY_DST:
+		if (!blk_queue_copy(q))
+			goto not_supported;
+		break;
 	default:
 		break;
 	}

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e835..10c3eadd5bf6 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -115,6 +115,183 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 }
 EXPORT_SYMBOL(blkdev_issue_discard);
 
+/*
+ * For synchronous copy offload/emulation, wait and process all in-flight BIOs.
+ * This must only be called once all bios have been issued so that the refcount
+ * can only decrease. This just waits for all bios to make it through
+ * blkdev_copy_(offload/emulate)_(read/write)_endio.
+ */
+static ssize_t blkdev_copy_wait_io_completion(struct cio *cio)
+{
+	ssize_t ret;
+
+	if (cio->endio)
+		return 0;
+
+	if (atomic_read(&cio->refcount)) {
+		__set_current_state(TASK_UNINTERRUPTIBLE);
+		blk_io_schedule();
+	}
+
+	ret = cio->comp_len;
+	kfree(cio);
+
+	return ret;
+}
+
+static void blkdev_copy_offload_read_endio(struct bio *bio)
+{
+	struct cio *cio = bio->bi_private;
+	sector_t clen;
+
+	if (bio->bi_status) {
+		clen = (bio->bi_iter.bi_sector << SECTOR_SHIFT) - cio->pos_out;
+		cio->comp_len = min_t(sector_t, clen, cio->comp_len);
+	}
+	bio_put(bio);
+
+	if (!atomic_dec_and_test(&cio->refcount))
+		return;
+	if (cio->endio) {
+		cio->endio(cio->private, cio->comp_len);
+		kfree(cio);
+	} else
+		blk_wake_io_task(cio->waiter);
+}
+
+/*
+ * __blkdev_copy_offload - use the device's native copy offload feature.
+ * The copy is performed by sending two bios:
+ * 1. Take a plug and send a REQ_OP_COPY_DST bio with the destination
+ * sector and length. Once this bio reaches the request layer, a request
+ * is formed which waits for the source bio to arrive.
+ * 2. Issue the REQ_OP_COPY_SRC bio with the source sector and length.
+ * Once this bio reaches the request layer and finds the request with the
+ * previously sent destination information, the source bio is merged.
+ * 3. Release the plug; the request is sent to the driver.
+ *
+ * Returns the number of bytes copied, or an error if one was encountered.
+ */
+static ssize_t __blkdev_copy_offload(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, cio_iodone_t endio, void *private, gfp_t gfp_mask)
+{
+	struct cio *cio;
+	struct bio *read_bio, *write_bio;
+	sector_t rem, copy_len, max_copy_len;
+	struct blk_plug plug;
+
+	cio = kzalloc(sizeof(struct cio), GFP_KERNEL);
+	if (!cio)
+		return -ENOMEM;
+	atomic_set(&cio->refcount, 0);
+	cio->waiter = current;
+	cio->endio = endio;
+	cio->private = private;
+
+	max_copy_len = min(bdev_max_copy_sectors(bdev_in),
+			bdev_max_copy_sectors(bdev_out)) << SECTOR_SHIFT;
+
+	cio->pos_in = pos_in;
+	cio->pos_out = pos_out;
+	/* On error, comp_len will be set to the lowest successfully
+	 * completed copied length
+	 */
+	cio->comp_len = len;
+	for (rem = len; rem > 0; rem -= copy_len) {
+		copy_len = min(rem, max_copy_len);
+
+		write_bio = bio_alloc(bdev_out, 0, REQ_OP_COPY_DST, gfp_mask);
+		if (!write_bio)
+			goto err_write_bio_alloc;
+		write_bio->bi_iter.bi_size = copy_len;
+		write_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
+
+		blk_start_plug(&plug);
+		read_bio = blk_next_bio(write_bio, bdev_in, 0, REQ_OP_COPY_SRC,
+				gfp_mask);
+		read_bio->bi_iter.bi_size = copy_len;
+		read_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
+		read_bio->bi_end_io = blkdev_copy_offload_read_endio;
+		read_bio->bi_private = cio;
+
+		atomic_inc(&cio->refcount);
+		submit_bio(read_bio);
+		blk_finish_plug(&plug);
+		pos_in += copy_len;
+		pos_out += copy_len;
+	}
+
+	return blkdev_copy_wait_io_completion(cio);
+
+err_write_bio_alloc:
+	cio->comp_len = min_t(sector_t, cio->comp_len, (len - rem));
+	if (!atomic_read(&cio->refcount)) {
+		kfree(cio);
+		return -ENOMEM;
+	}
+	return blkdev_copy_wait_io_completion(cio);
+}
+
+static inline ssize_t blkdev_copy_sanity_check(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct
block_device *bdev_out, loff_t pos_out,
+		size_t len)
+{
+	unsigned int align = max(bdev_logical_block_size(bdev_out),
+			bdev_logical_block_size(bdev_in)) - 1;
+
+	if (bdev_read_only(bdev_out))
+		return -EPERM;
+
+	if ((pos_in & align) || (pos_out & align) || (len & align) || !len ||
+	    len >= COPY_MAX_BYTES)
+		return -EINVAL;
+
+	return 0;
+}
+
+/*
+ * @bdev_in:	source block device
+ * @pos_in:	source offset
+ * @bdev_out:	destination block device
+ * @pos_out:	destination offset
+ * @len:	length in bytes to be copied
+ * @endio:	endio function to be called on completion of the copy
+ *		operation; for a synchronous operation this should be NULL
+ * @private:	private data passed to the endio function; should be NULL
+ *		if the operation is synchronous in nature
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Returns the number of bytes copied, or an error if one was encountered.
+ *
+ * Description:
+ *	Copy from a source offset on the source block device to a
+ *	destination offset on the destination block device. If copy offload
+ *	is not supported or fails, fall back to emulation.
+ *	The maximum total length of a copy is limited to COPY_MAX_BYTES.
+ */
+ssize_t blkdev_copy_offload(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, cio_iodone_t endio, void *private, gfp_t gfp_mask)
+{
+	struct request_queue *q_in = bdev_get_queue(bdev_in);
+	struct request_queue *q_out = bdev_get_queue(bdev_out);
+	ssize_t ret;
+
+	ret = blkdev_copy_sanity_check(bdev_in, pos_in, bdev_out, pos_out, len);
+	if (ret)
+		return ret;
+
+	if (blk_queue_copy(q_in) && blk_queue_copy(q_out))
+		ret = __blkdev_copy_offload(bdev_in, pos_in, bdev_out, pos_out,
+				len, endio, private, gfp_mask);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_copy_offload);
+
 static int __blkdev_issue_write_zeroes(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned flags)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 65e75efa9bd3..bfd86c54df22 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -922,6 +922,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (!rq_mergeable(rq) || !bio_mergeable(bio))
 		return false;
 
+	if ((req_op(rq) == REQ_OP_COPY_DST) && (bio_op(bio) == REQ_OP_COPY_SRC))
+		return true;
+
 	if (req_op(rq) != bio_op(bio))
 		return false;
 
@@ -951,6 +954,8 @@ enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
 {
 	if (blk_discard_mergable(rq))
 		return ELEVATOR_DISCARD_MERGE;
+	else if (blk_copy_offload_mergable(rq, bio))
+		return ELEVATOR_COPY_OFFLOAD_MERGE;
 	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
 		return ELEVATOR_BACK_MERGE;
 	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
@@ -1053,6 +1058,20 @@ static enum bio_merge_status bio_attempt_discard_merge(struct request_queue *q,
 	return BIO_MERGE_FAILED;
 }
 
+static enum bio_merge_status bio_attempt_copy_offload_merge(
+		struct request_queue *q, struct request *req, struct bio *bio)
+{
+	if (req->__data_len != bio->bi_iter.bi_size)
+		return
BIO_MERGE_FAILED; + + req->biotail->bi_next = bio; + req->biotail = bio; + req->nr_phys_segments = blk_rq_nr_phys_segments(req) + 1; + req->__data_len += bio->bi_iter.bi_size; + + return BIO_MERGE_OK; +} + static enum bio_merge_status blk_attempt_bio_merge(struct request_queue *q, struct request *rq, struct bio *bio, @@ -1073,6 +1092,8 @@ static enum bio_merge_status blk_attempt_bio_merge(struct request_queue *q, break; case ELEVATOR_DISCARD_MERGE: return bio_attempt_discard_merge(q, rq, bio); + case ELEVATOR_COPY_OFFLOAD_MERGE: + return bio_attempt_copy_offload_merge(q, rq, bio); default: return BIO_MERGE_NONE; } diff --git a/block/blk.h b/block/blk.h index 608c5dcc516b..440bfa148461 100644 --- a/block/blk.h +++ b/block/blk.h @@ -156,6 +156,13 @@ static inline bool blk_discard_mergable(struct request *req) return false; } +static inline bool blk_copy_offload_mergable(struct request *req, + struct bio *bio) +{ + return ((req_op(req) == REQ_OP_COPY_DST) && + (bio_op(bio) == REQ_OP_COPY_SRC)); +} + static inline unsigned int blk_rq_get_max_segments(struct request *rq) { if (req_op(rq) == REQ_OP_DISCARD) @@ -303,6 +310,8 @@ static inline bool bio_may_exceed_limits(struct bio *bio, break; } + if (unlikely(op_is_copy(bio->bi_opf))) + return false; /* * All drivers must accept single-segments bios that are <= PAGE_SIZE. 
* This is a quick and dirty check that relies on the fact that diff --git a/block/elevator.h b/block/elevator.h index 7ca3d7b6ed82..eec442bbf384 100644 --- a/block/elevator.h +++ b/block/elevator.h @@ -18,6 +18,7 @@ enum elv_merge { ELEVATOR_FRONT_MERGE = 1, ELEVATOR_BACK_MERGE = 2, ELEVATOR_DISCARD_MERGE = 3, + ELEVATOR_COPY_OFFLOAD_MERGE = 4, }; struct blk_mq_alloc_data; diff --git a/include/linux/bio.h b/include/linux/bio.h index c4f5b5228105..a2673f24e493 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -57,7 +57,9 @@ static inline bool bio_has_data(struct bio *bio) bio->bi_iter.bi_size && bio_op(bio) != REQ_OP_DISCARD && bio_op(bio) != REQ_OP_SECURE_ERASE && - bio_op(bio) != REQ_OP_WRITE_ZEROES) + bio_op(bio) != REQ_OP_WRITE_ZEROES && + bio_op(bio) != REQ_OP_COPY_DST && + bio_op(bio) != REQ_OP_COPY_SRC) return true; return false; diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 0bad62cca3d0..336146798e56 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -394,6 +394,9 @@ enum req_op { /* reset all the zone present on the device */ REQ_OP_ZONE_RESET_ALL = (__force blk_opf_t)17, + REQ_OP_COPY_SRC = (__force blk_opf_t)18, + REQ_OP_COPY_DST = (__force blk_opf_t)19, + /* Driver private requests */ REQ_OP_DRV_IN = (__force blk_opf_t)34, REQ_OP_DRV_OUT = (__force blk_opf_t)35, @@ -482,6 +485,12 @@ static inline bool op_is_write(blk_opf_t op) return !!(op & (__force blk_opf_t)1); } +static inline bool op_is_copy(blk_opf_t op) +{ + return (((op & REQ_OP_MASK) == REQ_OP_COPY_SRC) || + ((op & REQ_OP_MASK) == REQ_OP_COPY_DST)); +} + /* * Check if the bio or request is one that needs special treatment in the * flush state machine. 
@@ -541,4 +550,16 @@ struct blk_rq_stat {
 	u64 batch;
 };
 
+typedef void (cio_iodone_t)(void *private, int comp_len);
+
+struct cio {
+	struct task_struct *waiter;	/* waiting task (NULL if none) */
+	loff_t pos_in;
+	loff_t pos_out;
+	ssize_t comp_len;
+	cio_iodone_t *endio;		/* applicable for async operation */
+	void *private;			/* applicable for async operation */
+	atomic_t refcount;
+};
+
 #endif /* __LINUX_BLK_TYPES_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6098665953e6..963f5c97dec0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1043,6 +1043,10 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, struct bio **biop);
 int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp);
+ssize_t blkdev_copy_offload(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, cio_iodone_t end_io, void *private, gfp_t gfp_mask);
 
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */

From patchwork Tue Jun 27 18:36:17 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295363
From: Nitesh Shetty
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
 dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg,
 Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
 willy@infradead.org, hare@suse.de, djwong@kernel.org, bvanassche@acm.org,
 ming.lei@redhat.com, dlemoal@kernel.org, nitheshshetty@gmail.com,
 gost.dev@samsung.com, Nitesh Shetty, Vincent Fu, Anuj Gupta,
 linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 3/9] block: add emulation for copy
Date: Wed, 28 Jun 2023 00:06:17 +0530
Message-Id: <20230627183629.26571-4-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>
For devices that do not support copy offload, add copy emulation. It is
required for in-kernel users like fabrics, where a file descriptor is not
available and hence copy_file_range cannot be used. Emulation is implemented
by reading from the source into memory and writing from that memory to the
destination asynchronously. Emulation is also used when copy offload fails
or completes partially.

Signed-off-by: Nitesh Shetty
Signed-off-by: Vincent Fu
Signed-off-by: Anuj Gupta
---
 block/blk-lib.c           | 183 +++++++++++++++++++++++++++++++++++++-
 block/blk-map.c           |   4 +-
 include/linux/blk_types.h |   5 ++
 include/linux/blkdev.h    |   3 +
 4 files changed, 192 insertions(+), 3 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 10c3eadd5bf6..09e0d5d51d03 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -234,6 +234,180 @@ static ssize_t __blkdev_copy_offload(
 	return blkdev_copy_wait_io_completion(cio);
 }
 
+static void *blkdev_copy_alloc_buf(sector_t req_size, sector_t *alloc_size,
+		gfp_t gfp_mask)
+{
+	int min_size = PAGE_SIZE;
+	void *buf;
+
+	while (req_size >= min_size) {
+		buf = kvmalloc(req_size, gfp_mask);
+		if (buf) {
+			*alloc_size = req_size;
+			return buf;
+		}
+		/* retry half the requested size */
+		req_size >>= 1;
+	}
+
+	return NULL;
+}
+
+static void blkdev_copy_emulate_write_endio(struct bio *bio)
+{
+	struct copy_ctx *ctx = bio->bi_private;
+	struct cio *cio = ctx->cio;
+	sector_t clen;
+
+	if (bio->bi_status) {
+		clen = (bio->bi_iter.bi_sector << SECTOR_SHIFT) - cio->pos_out;
+		cio->comp_len = min_t(sector_t, clen, cio->comp_len);
+	}
+	kfree(bvec_virt(&bio->bi_io_vec[0]));
+	bio_map_kern_endio(bio);
+	kfree(ctx);
+	if (atomic_dec_and_test(&cio->refcount)) {
+		if (cio->endio) {
+			cio->endio(cio->private, cio->comp_len);
+			kfree(cio);
+		} else
+			blk_wake_io_task(cio->waiter);
+	}
+}
+
+static void blkdev_copy_emulate_read_endio(struct bio *read_bio)
+{
+	struct copy_ctx *ctx = read_bio->bi_private;
+	struct cio *cio = ctx->cio;
+	sector_t clen;
+
+	if (read_bio->bi_status) {
+		clen = (read_bio->bi_iter.bi_sector << SECTOR_SHIFT) -
+			cio->pos_in;
+		cio->comp_len = min_t(sector_t, clen, cio->comp_len);
+		kfree(bvec_virt(&read_bio->bi_io_vec[0]));
+		bio_map_kern_endio(read_bio);
+		kfree(ctx);
+
+		if (atomic_dec_and_test(&cio->refcount)) {
+			if (cio->endio) {
+				cio->endio(cio->private, cio->comp_len);
+				kfree(cio);
+			} else
+				blk_wake_io_task(cio->waiter);
+		}
+	}
+	schedule_work(&ctx->dispatch_work);
+	kfree(read_bio);
+}
+
+static void blkdev_copy_dispatch_work(struct work_struct *work)
+{
+	struct copy_ctx *ctx = container_of(work, struct copy_ctx,
+					    dispatch_work);
+
+	submit_bio(ctx->write_bio);
+}
+
+/*
+ * If the native copy offload feature is absent, this function emulates the
+ * copy by reading data from the source into a temporary buffer and writing
+ * it from that buffer to the destination device.
+ * Returns the number of bytes copied or a negative error code on failure.
+ */
+static ssize_t __blkdev_copy_emulate(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, cio_iodone_t endio, void *private, gfp_t gfp_mask)
+{
+	struct request_queue *in = bdev_get_queue(bdev_in);
+	struct request_queue *out = bdev_get_queue(bdev_out);
+	struct bio *read_bio, *write_bio;
+	void *buf = NULL;
+	struct copy_ctx *ctx;
+	struct cio *cio;
+	sector_t buf_len, req_len, rem = 0;
+	sector_t max_src_hw_len = min_t(unsigned int,
+			queue_max_hw_sectors(in),
+			queue_max_segments(in) << (PAGE_SHIFT - SECTOR_SHIFT))
+			<< SECTOR_SHIFT;
+	sector_t max_dst_hw_len = min_t(unsigned int,
+			queue_max_hw_sectors(out),
+			queue_max_segments(out) << (PAGE_SHIFT - SECTOR_SHIFT))
+			<< SECTOR_SHIFT;
+	sector_t max_hw_len = min_t(unsigned int,
+			max_src_hw_len, max_dst_hw_len);
+
+	cio = kzalloc(sizeof(struct cio), GFP_KERNEL);
+	if (!cio)
+		return -ENOMEM;
+	atomic_set(&cio->refcount, 0);
+	cio->pos_in = pos_in;
+	cio->pos_out = pos_out;
+	cio->waiter = current;
+	cio->endio = endio;
+	cio->private = private;
+
+	for (rem = len; rem > 0; rem -= buf_len) {
+		req_len = min_t(int, max_hw_len, rem);
+
+		buf = blkdev_copy_alloc_buf(req_len, &buf_len, gfp_mask);
+		if (!buf)
+			goto err_alloc_buf;
+
+		ctx = kzalloc(sizeof(struct copy_ctx), gfp_mask);
+		if (!ctx)
+			goto err_ctx;
+
+		read_bio = bio_map_kern(in, buf, buf_len, gfp_mask);
+		if (IS_ERR(read_bio))
+			goto err_read_bio;
+
+		write_bio = bio_map_kern(out, buf, buf_len, gfp_mask);
+		if (IS_ERR(write_bio))
+			goto err_write_bio;
+
+		ctx->cio = cio;
+		ctx->write_bio = write_bio;
+		INIT_WORK(&ctx->dispatch_work, blkdev_copy_dispatch_work);
+
+		read_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
+		read_bio->bi_iter.bi_size = buf_len;
+		read_bio->bi_opf = REQ_OP_READ | REQ_SYNC;
+		bio_set_dev(read_bio, bdev_in);
+		read_bio->bi_end_io = blkdev_copy_emulate_read_endio;
+		read_bio->bi_private = ctx;
+
+		write_bio->bi_iter.bi_size = buf_len;
+		write_bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
+		bio_set_dev(write_bio, bdev_out);
+		write_bio->bi_end_io = blkdev_copy_emulate_write_endio;
+		write_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
+		write_bio->bi_private = ctx;
+
+		atomic_inc(&cio->refcount);
+		submit_bio(read_bio);
+
+		pos_in += buf_len;
+		pos_out += buf_len;
+	}
+	return blkdev_copy_wait_io_completion(cio);
+
+err_write_bio:
+	bio_put(read_bio);
+err_read_bio:
+	kfree(ctx);
+err_ctx:
+	kvfree(buf);
+err_alloc_buf:
+	cio->comp_len -= min_t(sector_t, cio->comp_len, len - rem);
+	if (!atomic_read(&cio->refcount)) {
+		kfree(cio);
+		return -ENOMEM;
+	}
+	return blkdev_copy_wait_io_completion(cio);
+}
+
 static inline ssize_t blkdev_copy_sanity_check(
 		struct block_device *bdev_in, loff_t pos_in,
 		struct block_device *bdev_out, loff_t pos_out,
@@ -284,9 +458,16 @@ ssize_t blkdev_copy_offload(
 	if (ret)
 		return ret;
 
-	if (blk_queue_copy(q_in) && blk_queue_copy(q_out))
+	if (blk_queue_copy(q_in) && blk_queue_copy(q_out)) {
 		ret = __blkdev_copy_offload(bdev_in, pos_in, bdev_out, pos_out,
 					    len, endio, private, gfp_mask);
+		if (ret < 0)
+			ret = 0;
+	}
+
+	if (ret != len)
+		ret = __blkdev_copy_emulate(bdev_in, pos_in + ret, bdev_out,
+				pos_out + ret, len - ret, endio, private,
+				gfp_mask);
 
 	return ret;
 }
diff --git a/block/blk-map.c b/block/blk-map.c
index 44d74a30ddac..ceeb70a95fd1 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -363,7 +363,7 @@ static void bio_invalidate_vmalloc_pages(struct bio *bio)
 #endif
 }
 
-static void bio_map_kern_endio(struct bio *bio)
+void bio_map_kern_endio(struct bio *bio)
 {
 	bio_invalidate_vmalloc_pages(bio);
 	bio_uninit(bio);
@@ -380,7 +380,7 @@ static void bio_map_kern_endio(struct bio *bio)
 * Map the kernel address into a bio suitable for io to a block
 * device. Returns an error pointer in case of error.
 */
-static struct bio *bio_map_kern(struct request_queue *q, void *data,
+struct bio *bio_map_kern(struct request_queue *q, void *data,
 		unsigned int len, gfp_t gfp_mask)
 {
 	unsigned long kaddr = (unsigned long)data;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 336146798e56..f8c80940c7ad 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -562,4 +562,9 @@ struct cio {
 	atomic_t refcount;
 };
 
+struct copy_ctx {
+	struct cio *cio;
+	struct work_struct dispatch_work;
+	struct bio *write_bio;
+};
 #endif /* __LINUX_BLK_TYPES_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 963f5c97dec0..c176bf6173c5 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1047,6 +1047,9 @@ ssize_t blkdev_copy_offload(
 		struct block_device *bdev_in, loff_t pos_in,
 		struct block_device *bdev_out, loff_t pos_out,
 		size_t len, cio_iodone_t end_io, void *private, gfp_t gfp_mask);
+struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
+		gfp_t gfp_mask);
+void bio_map_kern_endio(struct bio *bio);
 
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */

From patchwork Tue Jun 27 18:36:18 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295364
From: Nitesh Shetty
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
 dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg,
 Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
 willy@infradead.org, hare@suse.de, djwong@kernel.org, bvanassche@acm.org,
 ming.lei@redhat.com, dlemoal@kernel.org, nitheshshetty@gmail.com,
 gost.dev@samsung.com, Nitesh Shetty, Anuj Gupta,
 linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 4/9] fs, block: copy_file_range for def_blk_ops for direct block device
Date: Wed, 28 Jun 2023 00:06:18 +0530
Message-Id: <20230627183629.26571-5-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>
For a block device opened with O_DIRECT, use copy_file_range to issue device
copy offload, and fall back to generic_copy_file_range in case the device
copy offload capability is absent. Modify checks to allow bdevs to use
copy_file_range.

Suggested-by: Ming Lei
Signed-off-by: Anuj Gupta
Signed-off-by: Nitesh Shetty
---
 block/blk-lib.c        | 26 ++++++++++++++++++++++++++
 block/fops.c           | 20 ++++++++++++++++++++
 fs/read_write.c        |  7 +++++--
 include/linux/blkdev.h |  4 ++++
 4 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 09e0d5d51d03..7d8e09a99254 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -473,6 +473,32 @@ ssize_t blkdev_copy_offload(
 }
 EXPORT_SYMBOL_GPL(blkdev_copy_offload);
 
+/* Copy data from a source offset on the source block device to a
+ * destination offset on the destination block device. Returns the
+ * number of bytes copied.
+ */
+ssize_t blkdev_copy_offload_failfast(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, gfp_t gfp_mask)
+{
+	struct request_queue *in_q = bdev_get_queue(bdev_in);
+	struct request_queue *out_q = bdev_get_queue(bdev_out);
+	ssize_t ret = 0;
+
+	if (blkdev_copy_sanity_check(bdev_in, pos_in, bdev_out, pos_out, len))
+		return 0;
+
+	if (blk_queue_copy(in_q) && blk_queue_copy(out_q)) {
+		ret = __blkdev_copy_offload(bdev_in, pos_in, bdev_out, pos_out,
+					    len, NULL, NULL, gfp_mask);
+		if (ret < 0)
+			return 0;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_copy_offload_failfast);
+
 static int __blkdev_issue_write_zeroes(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned flags)
diff --git a/block/fops.c b/block/fops.c
index a286bf3325c5..a1576304f269 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -621,6 +621,25 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	return ret;
 }
 
+static ssize_t blkdev_copy_file_range(struct file *file_in, loff_t pos_in,
+		struct file *file_out, loff_t pos_out,
+		size_t len, unsigned int flags)
+{
+	struct block_device *in_bdev = I_BDEV(bdev_file_inode(file_in));
+	struct block_device *out_bdev = I_BDEV(bdev_file_inode(file_out));
+	ssize_t comp_len = 0;
+
+	if ((file_in->f_iocb_flags & IOCB_DIRECT) &&
+	    (file_out->f_iocb_flags & IOCB_DIRECT))
+		comp_len = blkdev_copy_offload_failfast(in_bdev, pos_in,
+				out_bdev, pos_out, len, GFP_KERNEL);
+	if (comp_len != len)
+		comp_len = generic_copy_file_range(file_in, pos_in + comp_len,
+				file_out, pos_out + comp_len, len - comp_len,
+				flags);
+
+	return comp_len;
+}
+
 #define BLKDEV_FALLOC_FL_SUPPORTED \
 		(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \
 		 FALLOC_FL_ZERO_RANGE | FALLOC_FL_NO_HIDE_STALE)
@@ -714,6 +733,7 @@ const struct file_operations def_blk_fops = {
 	.splice_read	= filemap_splice_read,
 	.splice_write	= iter_file_splice_write,
 	.fallocate	= blkdev_fallocate,
+	.copy_file_range = blkdev_copy_file_range,
 };
 
 static __init int blkdev_init(void)
diff --git a/fs/read_write.c b/fs/read_write.c
index b07de77ef126..d27148a2543f 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -1447,7 +1447,8 @@ static int generic_copy_file_checks(struct file *file_in, loff_t pos_in,
 		return -EOVERFLOW;
 
 	/* Shorten the copy to EOF */
-	size_in = i_size_read(inode_in);
+	size_in = i_size_read(file_in->f_mapping->host);
+
 	if (pos_in >= size_in)
 		count = 0;
 	else
@@ -1708,7 +1709,9 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
 	/* Don't copy dirs, pipes, sockets... */
 	if (S_ISDIR(inode_in->i_mode) || S_ISDIR(inode_out->i_mode))
 		return -EISDIR;
-	if (!S_ISREG(inode_in->i_mode) || !S_ISREG(inode_out->i_mode))
+
+	if ((!S_ISREG(inode_in->i_mode) || !S_ISREG(inode_out->i_mode)) &&
+	    (!S_ISBLK(inode_in->i_mode) || !S_ISBLK(inode_out->i_mode)))
 		return -EINVAL;
 
 	if (!(file_in->f_mode & FMODE_READ) ||
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c176bf6173c5..850168cad080 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1047,6 +1047,10 @@ ssize_t blkdev_copy_offload(
 		struct block_device *bdev_in, loff_t pos_in,
 		struct block_device *bdev_out, loff_t pos_out,
 		size_t len, cio_iodone_t end_io, void *private, gfp_t gfp_mask);
+ssize_t blkdev_copy_offload_failfast(
+		struct block_device *bdev_in, loff_t pos_in,
+		struct block_device *bdev_out, loff_t pos_out,
+		size_t len, gfp_t gfp_mask);
 struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 		gfp_t gfp_mask);
 void bio_map_kern_endio(struct bio *bio);

From patchwork Tue Jun 27 18:36:19 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295370
From: Nitesh Shetty
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
 dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg,
 Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
 willy@infradead.org, hare@suse.de, djwong@kernel.org, bvanassche@acm.org,
 ming.lei@redhat.com, dlemoal@kernel.org, nitheshshetty@gmail.com,
 gost.dev@samsung.com, Nitesh Shetty, Kanchan Joshi, Javier González,
 Anuj Gupta, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 5/9] nvme: add copy offload support
Date: Wed, 28 Jun 2023 00:06:19 +0530
Message-Id: <20230627183629.26571-6-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>
vkT/fUGQMKeAtcTu80fAwkJAJe+PB0BUC0qcnPkEbC2zgLrE+nlCIGFmAXmJ5q2zmScwCs5C UjULoWoWkqoFjMyrGCVTC4pz03OLDQuM8lLL9YoTc4tL89L1kvNzNzGCU4iW1g7GPas+6B1i ZOJgPMQowcGsJMIr9mN6ihBvSmJlVWpRfnxRaU5q8SFGaQ4WJXHeb697U4QE0hNLUrNTUwtS i2CyTBycUg1M9RHi0aySd9Sk777SuBQj8VBnM8M8od3fVZRaxKfl6mxxdimyNnn5arZN5781 TR83B6pOOLOTOzX339nIZzrFLw6sMFhQnNr+YN0Mqynlyq++B8W/yclXWyfwOPjNmgvHJvRx vmzIuTiv9/E/lqXzl+qscZOyOVehu5tb85RelWl7oWCO4an/F1osLdV4kvcs//PY9tj/cjbZ WZ9uTCxwfJLy62ueyddPDLGTO/JPJAR8CcvraVhRcjJw6W7L6g9ndXa3m1f6KKikK5/aM2XN 7gsPCmbPf7wmc7Ljg+OKVZk/2Jsv2t3P5+jZlrt3GvekzbW3Ja78zUy7tWbNJpErRX0nHp08 OCdup2hRY6zZDCWW4oxEQy3mouJEAD5H9IaQAwAA X-CMS-MailID: 20230627184039epcas5p2decb92731d3e7dfdf9f2c05309a90bd7 X-Msg-Generator: CA X-Sendblock-Type: REQ_APPROVE CMS-TYPE: 105P DLP-Filter: Pass X-CFilter-Loop: Reflected X-CMS-RootMailID: 20230627184039epcas5p2decb92731d3e7dfdf9f2c05309a90bd7 References: <20230627183629.26571-1-nj.shetty@samsung.com> Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Current design only supports single source range. We receive a request with REQ_OP_COPY_DST. Parse this request which consists of dst(1st) and src(2nd) bios. Form a copy command (TP 4065) trace event support for nvme_copy_cmd. Set the device copy limits to queue limits. 
Signed-off-by: Kanchan Joshi Signed-off-by: Nitesh Shetty Signed-off-by: Javier González Signed-off-by: Anuj Gupta --- drivers/nvme/host/constants.c | 1 + drivers/nvme/host/core.c | 79 +++++++++++++++++++++++++++++++++++ drivers/nvme/host/trace.c | 19 +++++++++ include/linux/nvme.h | 43 +++++++++++++++++-- 4 files changed, 139 insertions(+), 3 deletions(-) diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c index 5e4f8848dce0..311ad67e9cf3 100644 --- a/drivers/nvme/host/constants.c +++ b/drivers/nvme/host/constants.c @@ -19,6 +19,7 @@ static const char * const nvme_ops[] = { [nvme_cmd_resv_report] = "Reservation Report", [nvme_cmd_resv_acquire] = "Reservation Acquire", [nvme_cmd_resv_release] = "Reservation Release", + [nvme_cmd_copy] = "Copy Offload", [nvme_cmd_zone_mgmt_send] = "Zone Management Send", [nvme_cmd_zone_mgmt_recv] = "Zone Management Receive", [nvme_cmd_zone_append] = "Zone Append", diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 98bfb3d9c22a..d4063e981492 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -763,6 +763,60 @@ static inline void nvme_setup_flush(struct nvme_ns *ns, cmnd->common.nsid = cpu_to_le32(ns->head->ns_id); } +static inline blk_status_t nvme_setup_copy_write(struct nvme_ns *ns, + struct request *req, struct nvme_command *cmnd) +{ + struct nvme_copy_range *range = NULL; + struct bio *bio; + u64 dst_lba, src_lba, n_lba; + u16 nr_range = 1, control = 0; + + if (blk_rq_nr_phys_segments(req) != 2) + return BLK_STS_IOERR; + + /* +1 shift as dst+src length is added in request merging, we send copy + * for half the length. 
+ */ + n_lba = blk_rq_bytes(req) >> (ns->lba_shift + 1); + if (WARN_ON(!n_lba)) + return BLK_STS_NOTSUPP; + + dst_lba = nvme_sect_to_lba(ns, blk_rq_pos(req)); + __rq_for_each_bio(bio, req) { + src_lba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector); + if (n_lba != bio->bi_iter.bi_size >> ns->lba_shift) + return BLK_STS_IOERR; + } + + if (req->cmd_flags & REQ_FUA) + control |= NVME_RW_FUA; + + if (req->cmd_flags & REQ_FAILFAST_DEV) + control |= NVME_RW_LR; + + memset(cmnd, 0, sizeof(*cmnd)); + cmnd->copy.opcode = nvme_cmd_copy; + cmnd->copy.nsid = cpu_to_le32(ns->head->ns_id); + cmnd->copy.control = cpu_to_le16(control); + cmnd->copy.sdlba = cpu_to_le64(dst_lba); + cmnd->copy.nr_range = 0; + + range = kmalloc_array(nr_range, sizeof(*range), + GFP_ATOMIC | __GFP_NOWARN); + if (!range) + return BLK_STS_RESOURCE; + + range[0].slba = cpu_to_le64(src_lba); + range[0].nlb = cpu_to_le16(n_lba - 1); + + req->special_vec.bv_page = virt_to_page(range); + req->special_vec.bv_offset = offset_in_page(range); + req->special_vec.bv_len = sizeof(*range) * nr_range; + req->rq_flags |= RQF_SPECIAL_PAYLOAD; + + return BLK_STS_OK; +} + static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req, struct nvme_command *cmnd) { @@ -1005,6 +1059,9 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req) case REQ_OP_ZONE_APPEND: ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_zone_append); break; + case REQ_OP_COPY_DST: + ret = nvme_setup_copy_write(ns, req, cmd); + break; default: WARN_ON_ONCE(1); return BLK_STS_IOERR; @@ -1742,6 +1799,26 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns) blk_queue_max_write_zeroes_sectors(queue, UINT_MAX); } +static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns, + struct nvme_id_ns *id) +{ + struct nvme_ctrl *ctrl = ns->ctrl; + struct request_queue *q = disk->queue; + + if (!(ctrl->oncs & NVME_CTRL_ONCS_COPY)) { + blk_queue_max_copy_sectors_hw(q, 0); + 
blk_queue_flag_clear(QUEUE_FLAG_COPY, q); + return; + } + + /* setting copy limits */ + if (blk_queue_flag_test_and_set(QUEUE_FLAG_COPY, q)) + return; + + blk_queue_max_copy_sectors_hw(q, + nvme_lba_to_sect(ns, le16_to_cpu(id->mssrl))); +} + static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b) { return uuid_equal(&a->uuid, &b->uuid) && @@ -1941,6 +2018,7 @@ static void nvme_update_disk_info(struct gendisk *disk, set_capacity_and_notify(disk, capacity); nvme_config_discard(disk, ns); + nvme_config_copy(disk, ns, id); blk_queue_max_write_zeroes_sectors(disk->queue, ns->ctrl->max_zeroes_sectors); } @@ -4600,6 +4678,7 @@ static inline void _nvme_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_download_firmware) != 64); BUILD_BUG_ON(sizeof(struct nvme_format_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_dsm_cmd) != 64); + BUILD_BUG_ON(sizeof(struct nvme_copy_command) != 64); BUILD_BUG_ON(sizeof(struct nvme_write_zeroes_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_abort_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_get_log_page_command) != 64); diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c index 1c36fcedea20..da4a7494e5a7 100644 --- a/drivers/nvme/host/trace.c +++ b/drivers/nvme/host/trace.c @@ -150,6 +150,23 @@ static const char *nvme_trace_read_write(struct trace_seq *p, u8 *cdw10) return ret; } +static const char *nvme_trace_copy(struct trace_seq *p, u8 *cdw10) +{ + const char *ret = trace_seq_buffer_ptr(p); + u64 slba = get_unaligned_le64(cdw10); + u8 nr_range = get_unaligned_le16(cdw10 + 8); + u16 control = get_unaligned_le16(cdw10 + 10); + u32 dsmgmt = get_unaligned_le32(cdw10 + 12); + u32 reftag = get_unaligned_le32(cdw10 + 16); + + trace_seq_printf(p, + "slba=%llu, nr_range=%u, ctrl=0x%x, dsmgmt=%u, reftag=%u", + slba, nr_range, control, dsmgmt, reftag); + trace_seq_putc(p, 0); + + return ret; +} + static const char *nvme_trace_dsm(struct trace_seq *p, u8 *cdw10) { const char *ret = trace_seq_buffer_ptr(p); @@ -243,6 
+260,8 @@ const char *nvme_trace_parse_nvm_cmd(struct trace_seq *p, return nvme_trace_zone_mgmt_send(p, cdw10); case nvme_cmd_zone_mgmt_recv: return nvme_trace_zone_mgmt_recv(p, cdw10); + case nvme_cmd_copy: + return nvme_trace_copy(p, cdw10); default: return nvme_trace_common(p, cdw10); } diff --git a/include/linux/nvme.h b/include/linux/nvme.h index 182b6d614eb1..bbd877111b57 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -337,7 +337,7 @@ struct nvme_id_ctrl { __u8 nvscc; __u8 nwpc; __le16 acwu; - __u8 rsvd534[2]; + __le16 ocfs; __le32 sgls; __le32 mnan; __u8 rsvd544[224]; @@ -365,6 +365,7 @@ enum { NVME_CTRL_ONCS_WRITE_ZEROES = 1 << 3, NVME_CTRL_ONCS_RESERVATIONS = 1 << 5, NVME_CTRL_ONCS_TIMESTAMP = 1 << 6, + NVME_CTRL_ONCS_COPY = 1 << 8, NVME_CTRL_VWC_PRESENT = 1 << 0, NVME_CTRL_OACS_SEC_SUPP = 1 << 0, NVME_CTRL_OACS_NS_MNGT_SUPP = 1 << 3, @@ -414,7 +415,10 @@ struct nvme_id_ns { __le16 npdg; __le16 npda; __le16 nows; - __u8 rsvd74[18]; + __le16 mssrl; + __le32 mcl; + __u8 msrc; + __u8 rsvd91[11]; __le32 anagrpid; __u8 rsvd96[3]; __u8 nsattr; @@ -831,6 +835,7 @@ enum nvme_opcode { nvme_cmd_resv_report = 0x0e, nvme_cmd_resv_acquire = 0x11, nvme_cmd_resv_release = 0x15, + nvme_cmd_copy = 0x19, nvme_cmd_zone_mgmt_send = 0x79, nvme_cmd_zone_mgmt_recv = 0x7a, nvme_cmd_zone_append = 0x7d, @@ -854,7 +859,8 @@ enum nvme_opcode { nvme_opcode_name(nvme_cmd_resv_release), \ nvme_opcode_name(nvme_cmd_zone_mgmt_send), \ nvme_opcode_name(nvme_cmd_zone_mgmt_recv), \ - nvme_opcode_name(nvme_cmd_zone_append)) + nvme_opcode_name(nvme_cmd_zone_append), \ + nvme_opcode_name(nvme_cmd_copy)) @@ -1031,6 +1037,36 @@ struct nvme_dsm_range { __le64 slba; }; +struct nvme_copy_command { + __u8 opcode; + __u8 flags; + __u16 command_id; + __le32 nsid; + __u64 rsvd2; + __le64 metadata; + union nvme_data_ptr dptr; + __le64 sdlba; + __u8 nr_range; + __u8 rsvd12; + __le16 control; + __le16 rsvd13; + __le16 dspec; + __le32 ilbrt; + __le16 lbat; + __le16 lbatm; +}; + +struct 
nvme_copy_range { + __le64 rsvd0; + __le64 slba; + __le16 nlb; + __le16 rsvd18; + __le32 rsvd20; + __le32 eilbrt; + __le16 elbat; + __le16 elbatm; +}; + struct nvme_write_zeroes_cmd { __u8 opcode; __u8 flags; @@ -1792,6 +1828,7 @@ struct nvme_command { struct nvme_download_firmware dlfw; struct nvme_format_cmd format; struct nvme_dsm_cmd dsm; + struct nvme_copy_command copy; struct nvme_write_zeroes_cmd write_zeroes; struct nvme_zone_mgmt_send_cmd zms; struct nvme_zone_mgmt_recv_cmd zmr;

From patchwork Tue Jun 27 18:36:20 2023
X-Patchwork-Submitter: Nitesh Shetty
X-Patchwork-Id: 13295376
From: Nitesh Shetty <nj.shetty@samsung.com>
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer, dm-devel@redhat.com, Keith Busch, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Alexander Viro, Christian Brauner
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org, willy@infradead.org, hare@suse.de, djwong@kernel.org, bvanassche@acm.org, ming.lei@redhat.com, dlemoal@kernel.org, nitheshshetty@gmail.com, gost.dev@samsung.com, Nitesh Shetty, Anuj Gupta, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 6/9] nvmet: add copy command support for bdev and file ns
Date: Wed, 28 Jun 2023 00:06:20 +0530
Message-Id: <20230627183629.26571-7-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>

Add support for handling the nvme_cmd_copy command on the target. For a bdev-ns we call into blkdev_copy_offload, which the block layer completes either by sending an offloaded copy request to the backing bdev or by emulating the request.
For a file-ns we call vfs_copy_file_range to service the request. Currently the target always advertises copy capability by setting NVME_CTRL_ONCS_COPY in the controller ONCS. The loop target has copy support, which can be used to test copy offload. Signed-off-by: Nitesh Shetty Signed-off-by: Anuj Gupta --- drivers/nvme/target/admin-cmd.c | 9 ++++- drivers/nvme/target/io-cmd-bdev.c | 62 +++++++++++++++++++++++++++++++ drivers/nvme/target/io-cmd-file.c | 52 ++++++++++++++++++++++++++ drivers/nvme/target/nvmet.h | 1 + 4 files changed, 122 insertions(+), 2 deletions(-) diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c index 39cb570f833d..8e644b8ec0fd 100644 --- a/drivers/nvme/target/admin-cmd.c +++ b/drivers/nvme/target/admin-cmd.c @@ -433,8 +433,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req) id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES); id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES); id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM | - NVME_CTRL_ONCS_WRITE_ZEROES); - + NVME_CTRL_ONCS_WRITE_ZEROES | NVME_CTRL_ONCS_COPY); /* XXX: don't report vwc if the underlying device is write through */ id->vwc = NVME_CTRL_VWC_PRESENT; @@ -536,6 +535,12 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) if (req->ns->bdev) nvmet_bdev_set_limits(req->ns->bdev, id); + else { + id->msrc = (__force u8)to0based(BIO_MAX_VECS - 1); + id->mssrl = cpu_to_le16(BIO_MAX_VECS << + (PAGE_SHIFT - SECTOR_SHIFT)); + id->mcl = cpu_to_le32(le16_to_cpu(id->mssrl)); + } /* * We just provide a single LBA format that matches what the diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c index 2733e0158585..5c4c6a460cfa 100644 --- a/drivers/nvme/target/io-cmd-bdev.c +++ b/drivers/nvme/target/io-cmd-bdev.c @@ -46,6 +46,18 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id) id->npda = id->npdg; /* NOWS = Namespace Optimal Write Size */ id->nows = to0based(bdev_io_opt(bdev) / bdev_logical_block_size(bdev)); + + if
(bdev_max_copy_sectors(bdev)) { + id->msrc = id->msrc; + id->mssrl = cpu_to_le16((bdev_max_copy_sectors(bdev) << + SECTOR_SHIFT) / bdev_logical_block_size(bdev)); + id->mcl = cpu_to_le32((__force u32)id->mssrl); + } else { + id->msrc = (__force u8)to0based(BIO_MAX_VECS - 1); + id->mssrl = cpu_to_le16((BIO_MAX_VECS << PAGE_SHIFT) / + bdev_logical_block_size(bdev)); + id->mcl = cpu_to_le32((__force u32)id->mssrl); + } } void nvmet_bdev_ns_disable(struct nvmet_ns *ns) @@ -184,6 +196,21 @@ static void nvmet_bio_done(struct bio *bio) nvmet_req_bio_put(req, bio); } +static void nvmet_bdev_copy_end_io(void *private, int comp_len) +{ + struct nvmet_req *req = (struct nvmet_req *)private; + u16 status; + + if (comp_len == req->copy_len) { + req->cqe->result.u32 = cpu_to_le32(1); + status = errno_to_nvme_status(req, 0); + } else { + req->cqe->result.u32 = cpu_to_le32(0); + status = errno_to_nvme_status(req, (__force u16)BLK_STS_IOERR); + } + nvmet_req_complete(req, status); +} + #ifdef CONFIG_BLK_DEV_INTEGRITY static int nvmet_bdev_alloc_bip(struct nvmet_req *req, struct bio *bio, struct sg_mapping_iter *miter) @@ -450,6 +477,37 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req) } } +/* At present we handle only one range entry, since copy offload is aligned with + * copy_file_range, only one entry is passed from block layer. 
+ */ +static void nvmet_bdev_execute_copy(struct nvmet_req *req) +{ + struct nvme_copy_range range; + struct nvme_command *cmd = req->cmd; + ssize_t ret; + u16 status; + + status = nvmet_copy_from_sgl(req, 0, &range, sizeof(range)); + if (status) + goto out; + + ret = blkdev_copy_offload(req->ns->bdev, + le64_to_cpu(cmd->copy.sdlba) << req->ns->blksize_shift, + req->ns->bdev, + le64_to_cpu(range.slba) << req->ns->blksize_shift, + (le16_to_cpu(range.nlb) + 1) << req->ns->blksize_shift, + nvmet_bdev_copy_end_io, (void *)req, GFP_KERNEL); + if (ret) { + req->cqe->result.u32 = cpu_to_le32(0); + status = blk_to_nvme_status(req, BLK_STS_IOERR); + goto out; + } + + return; +out: + nvmet_req_complete(req, errno_to_nvme_status(req, status)); +} + u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req) { switch (req->cmd->common.opcode) { @@ -468,6 +526,10 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req) case nvme_cmd_write_zeroes: req->execute = nvmet_bdev_execute_write_zeroes; return 0; + case nvme_cmd_copy: + req->execute = nvmet_bdev_execute_copy; + return 0; + default: return nvmet_report_invalid_opcode(req); } diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c index 2d068439b129..f61aa834f7a5 100644 --- a/drivers/nvme/target/io-cmd-file.c +++ b/drivers/nvme/target/io-cmd-file.c @@ -322,6 +322,49 @@ static void nvmet_file_dsm_work(struct work_struct *w) } } +static void nvmet_file_copy_work(struct work_struct *w) +{ + struct nvmet_req *req = container_of(w, struct nvmet_req, f.work); + int nr_range = req->cmd->copy.nr_range + 1; + u16 status = 0; + int src, id; + ssize_t len, ret; + loff_t pos; + + pos = le64_to_cpu(req->cmd->copy.sdlba) << req->ns->blksize_shift; + if (unlikely(pos + req->transfer_len > req->ns->size)) { + nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC)); + return; + } + + for (id = 0 ; id < nr_range; id++) { + struct nvme_copy_range range; + + status = nvmet_copy_from_sgl(req, id * sizeof(range), &range, + 
sizeof(range)); + if (status) + goto out; + + src = (le64_to_cpu(range.slba) << (req->ns->blksize_shift)); + len = (le16_to_cpu(range.nlb) + 1) << (req->ns->blksize_shift); + ret = vfs_copy_file_range(req->ns->file, src, req->ns->file, + pos, len, 0); + if (ret != len) { + pos += ret; + req->cqe->result.u32 = cpu_to_le32(id); + if (ret < 0) + status = errno_to_nvme_status(req, ret); + else + status = errno_to_nvme_status(req, -EIO); + goto out; + } else + pos += len; + } + +out: + nvmet_req_complete(req, status); +} + static void nvmet_file_execute_dsm(struct nvmet_req *req) { if (!nvmet_check_data_len_lte(req, nvmet_dsm_len(req))) @@ -330,6 +373,12 @@ static void nvmet_file_execute_dsm(struct nvmet_req *req) queue_work(nvmet_wq, &req->f.work); } +static void nvmet_file_execute_copy(struct nvmet_req *req) +{ + INIT_WORK(&req->f.work, nvmet_file_copy_work); + queue_work(nvmet_wq, &req->f.work); +} + static void nvmet_file_write_zeroes_work(struct work_struct *w) { struct nvmet_req *req = container_of(w, struct nvmet_req, f.work); @@ -376,6 +425,9 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req) case nvme_cmd_write_zeroes: req->execute = nvmet_file_execute_write_zeroes; return 0; + case nvme_cmd_copy: + req->execute = nvmet_file_execute_copy; + return 0; default: return nvmet_report_invalid_opcode(req); } diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h index 6cf723bc664e..c6fb515ee1c5 100644 --- a/drivers/nvme/target/nvmet.h +++ b/drivers/nvme/target/nvmet.h @@ -393,6 +393,7 @@ struct nvmet_req { struct device *p2p_client; u16 error_loc; u64 error_slba; + size_t copy_len; }; #define NVMET_MAX_MPOOL_BVEC 16 From patchwork Tue Jun 27 18:36:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nitesh Shetty X-Patchwork-Id: 13295342 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60F18EB64DC for ; Wed, 28 Jun 2023 08:06:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232495AbjF1IG3 (ORCPT ); Wed, 28 Jun 2023 04:06:29 -0400 Received: from mailout2.samsung.com ([203.254.224.25]:15421 "EHLO mailout2.samsung.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232896AbjF1IDM (ORCPT ); Wed, 28 Jun 2023 04:03:12 -0400 Received: from epcas5p2.samsung.com (unknown [182.195.41.40]) by mailout2.samsung.com (KnoxPortal) with ESMTP id 20230628064451epoutp027099d2c1c95d895611b2f07125c69273~sv6iRf2U02756127561epoutp02g for ; Wed, 28 Jun 2023 06:44:51 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.samsung.com 20230628064451epoutp027099d2c1c95d895611b2f07125c69273~sv6iRf2U02756127561epoutp02g DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com; s=mail20170921; t=1687934691; bh=HaM5wl7d6XlAT/Tg8+7Uibwdp6IRklJIPzVlJl8O1BY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s54d1LujHUFBcUdjgIC0OYg7Gh+sKKoOyaPXMQnxP++sNbaXZ3tsMeSWaqWTFlzvv /udaIDrvOsCaUQDLVfmX6zG23k2GJrKmyECAZJ+JzA3xNQ0sbdt61tmMx8FPP3YLHe UvGsxRPVhAkzSwhj5Gk5f4nXJ+pZFGo+8RAdGR2Q= Received: from epsnrtp2.localdomain (unknown [182.195.42.163]) by epcas5p2.samsung.com (KnoxPortal) with ESMTP id 20230628064450epcas5p27072c59c3f70a29396a8ecdc75477e06~sv6hUlEmG1602616026epcas5p2A; Wed, 28 Jun 2023 06:44:50 +0000 (GMT) Received: from epsmgec5p1-new.samsung.com (unknown [182.195.38.177]) by epsnrtp2.localdomain (Postfix) with ESMTP id 4QrXBS6H7Lz4x9Q6; Wed, 28 Jun 2023 06:44:48 +0000 (GMT) Received: from epcas5p4.samsung.com ( [182.195.41.42]) by epsmgec5p1-new.samsung.com (Symantec Messaging Gateway) with SMTP id 9E.28.55173.0E6DB946; Wed, 28 Jun 2023 15:44:48 +0900 (KST) Received: from epsmtrp1.samsung.com (unknown [182.195.40.13]) by epcas5p2.samsung.com (KnoxPortal) with ESMTPA id 
From: Nitesh Shetty
Subject: [PATCH v13 7/9] dm: Add support for copy offload
Date: Wed, 28 Jun 2023 00:06:21 +0530
Message-Id: <20230627183629.26571-8-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>

Before enabling copy offload for a dm target, check that both the
underlying devices and the dm target support copy. Splits inside the dm
target are avoided: fail early if the request would need a split, since
splitting a copy request is not currently supported.
Signed-off-by: Nitesh Shetty
---
 drivers/md/dm-table.c         | 41 +++++++++++++++++++++++++++++++++++
 drivers/md/dm.c               |  7 ++++++
 include/linux/device-mapper.h |  5 +++++
 3 files changed, 53 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 7d208b2b1a19..2d08a890d7e1 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1862,6 +1862,39 @@ static bool dm_table_supports_nowait(struct dm_table *t)
 	return true;
 }
 
+static int device_not_copy_capable(struct dm_target *ti, struct dm_dev *dev,
+				   sector_t start, sector_t len, void *data)
+{
+	struct request_queue *q = bdev_get_queue(dev->bdev);
+
+	return !blk_queue_copy(q);
+}
+
+static bool dm_table_supports_copy(struct dm_table *t)
+{
+	struct dm_target *ti;
+	unsigned int i;
+
+	for (i = 0; i < t->num_targets; i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (!ti->copy_offload_supported)
+			return false;
+
+		/*
+		 * target provides copy support (as implied by setting
+		 * 'copy_offload_supported')
+		 * and it relies on _all_ data devices having copy support.
+		 */
+		if (!ti->type->iterate_devices ||
+		    ti->type->iterate_devices(ti,
+				device_not_copy_capable, NULL))
+			return false;
+	}
+
+	return true;
+}
+
 static int device_not_discard_capable(struct dm_target *ti, struct dm_dev *dev,
				       sector_t start, sector_t len, void *data)
 {
@@ -1944,6 +1977,14 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		q->limits.discard_misaligned = 0;
 	}
 
+	if (!dm_table_supports_copy(t)) {
+		blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+		q->limits.max_copy_sectors = 0;
+		q->limits.max_copy_sectors_hw = 0;
+	} else {
+		blk_queue_flag_set(QUEUE_FLAG_COPY, q);
+	}
+
 	if (!dm_table_supports_secure_erase(t))
 		q->limits.max_secure_erase_sectors = 0;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index f0f118ab20fa..6245e16bf066 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1732,6 +1732,13 @@ static blk_status_t __split_and_process_bio(struct clone_info *ci)
 	if (unlikely(ci->is_abnormal_io))
 		return __process_abnormal_io(ci, ti);
 
+	if ((unlikely(op_is_copy(ci->bio->bi_opf)) &&
+	     max_io_len(ti, ci->sector) < ci->sector_count)) {
+		DMERR("Error, IO size(%u) > max target size(%llu)\n",
+		      ci->sector_count, max_io_len(ti, ci->sector));
+		return BLK_STS_IOERR;
+	}
+
 	/*
 	 * Only support bio polling for normal IO, and the target io is
 	 * exactly inside the dm_io instance (verified in dm_poll_dm_io)
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 69d0435c7ebb..8ffee7e8cd06 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -396,6 +396,11 @@ struct dm_target {
	 * bio_set_dev(). NOTE: ideally a target should _not_ need this.
	 */
	bool needs_bio_set_dev:1;
+
+	/*
+	 * copy offload is supported
+	 */
+	bool copy_offload_supported:1;
 };
 
 void *dm_per_bio_data(struct bio *bio, size_t data_size);

From patchwork Tue Jun 27 18:36:22 2023
From: Nitesh Shetty
Subject: [PATCH v13 8/9] dm: Enable copy offload for dm-linear target
Date: Wed, 28 Jun 2023 00:06:22 +0530
Message-Id: <20230627183629.26571-9-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>

Set the copy_offload_supported flag in the dm-linear constructor to
enable copy offload.
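The flag set here feeds the table-level check dm_table_supports_copy() introduced earlier in the series: offload is advertised only if every target opts in and every underlying data device is copy-capable. A userspace sketch of that conjunction (all struct and function names below are hypothetical illustrations, not the kernel's types):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model: a dm table supports copy offload only when each
 * target has opted in AND all of its underlying devices are capable. */
struct fake_dev {
	bool copy_capable;
};

struct fake_target {
	bool copy_offload_supported;	/* the opt-in flag this patch sets */
	const struct fake_dev *devs;	/* underlying data devices */
	size_t ndevs;
};

static bool target_supports_copy(const struct fake_target *ti)
{
	if (!ti->copy_offload_supported)
		return false;
	/* mirrors iterating devices with device_not_copy_capable() */
	for (size_t i = 0; i < ti->ndevs; i++)
		if (!ti->devs[i].copy_capable)
			return false;
	return true;
}

static bool table_supports_copy(const struct fake_target *targets, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (!target_supports_copy(&targets[i]))
			return false;
	return true;
}
```

One incapable device anywhere in the stack disables offload for the whole table, which is why dm_table_set_restrictions() then clears QUEUE_FLAG_COPY and zeroes the copy limits.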
Signed-off-by: Nitesh Shetty
---
 drivers/md/dm-linear.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index f4448d520ee9..1d1ee30bbefb 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_discard_bios = 1;
 	ti->num_secure_erase_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->copy_offload_supported = 1;
 	ti->private = lc;
 	return 0;

From patchwork Tue Jun 27 18:36:23 2023
From: Nitesh Shetty
Subject: [PATCH v13 9/9] null_blk: add support for copy offload
Date: Wed, 28 Jun 2023 00:06:23 +0530
Message-Id: <20230627183629.26571-10-nj.shetty@samsung.com>
In-Reply-To: <20230627183629.26571-1-nj.shetty@samsung.com>

Implementation is based on the existing read and write infrastructure.

copy_max_bytes: a new configfs and module parameter is introduced, which
can be used to set the hardware/driver-supported maximum copy limit.
Suggested-by: Damien Le Moal
Signed-off-by: Anuj Gupta
Signed-off-by: Nitesh Shetty
Signed-off-by: Vincent Fu
---
 Documentation/block/null_blk.rst  |  5 ++
 drivers/block/null_blk/main.c     | 85 +++++++++++++++++++++++++++++--
 drivers/block/null_blk/null_blk.h |  1 +
 3 files changed, 88 insertions(+), 3 deletions(-)

diff --git a/Documentation/block/null_blk.rst b/Documentation/block/null_blk.rst
index 4dd78f24d10a..6153e02fcf13 100644
--- a/Documentation/block/null_blk.rst
+++ b/Documentation/block/null_blk.rst
@@ -149,3 +149,8 @@ zone_size=[MB]: Default: 256
 zone_nr_conv=[nr_conv]: Default: 0
   The number of conventional zones to create when block device is zoned. If
   zone_nr_conv >= nr_zones, it will be reduced to nr_zones - 1.
+
+copy_max_bytes=[size in bytes]: Default: COPY_MAX_BYTES
+  A module and configfs parameter which can be used to set hardware/driver
+  supported maximum copy offload limit.
+  COPY_MAX_BYTES(=128MB at present) is defined in fs.h
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 864013019d6b..e9461bd4dc2c 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -157,6 +157,10 @@ static int g_max_sectors;
 module_param_named(max_sectors, g_max_sectors, int, 0444);
 MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
 
+static unsigned long g_copy_max_bytes = COPY_MAX_BYTES;
+module_param_named(copy_max_bytes, g_copy_max_bytes, ulong, 0444);
+MODULE_PARM_DESC(copy_max_bytes, "Maximum size of a copy command (in bytes)");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
 NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(copy_max_bytes, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_max_sectors,
+	&nullb_device_attr_copy_max_bytes,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -656,7 +662,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
 			"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
 			"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
 			"zone_capacity,zone_max_active,zone_max_open,"
-			"zone_nr_conv,zone_offline,zone_readonly,zone_size\n");
+			"zone_nr_conv,zone_offline,zone_readonly,zone_size,"
+			"copy_max_bytes\n");
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -722,6 +729,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
 	dev->max_sectors = g_max_sectors;
+	dev->copy_max_bytes = g_copy_max_bytes;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1271,6 +1279,67 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	return err;
 }
 
+static inline int nullb_setup_copy_write(struct nullb *nullb,
+					 struct request *req, bool is_fua)
+{
+	sector_t sector_in, sector_out;
+	void *in, *out;
+	size_t rem, temp;
+	struct bio *bio;
+	unsigned long offset_in, offset_out;
+	struct nullb_page *t_page_in, *t_page_out;
+	int ret = -EIO;
+
+	sector_out = blk_rq_pos(req);
+
+	__rq_for_each_bio(bio, req) {
+		sector_in = bio->bi_iter.bi_sector;
+		rem = bio->bi_iter.bi_size;
+	}
+
+	if (WARN_ON(!rem))
+		return BLK_STS_NOTSUPP;
+
+	spin_lock_irq(&nullb->lock);
+	while (rem > 0) {
+		temp = min_t(size_t, nullb->dev->blocksize, rem);
+		offset_in = (sector_in & SECTOR_MASK) << SECTOR_SHIFT;
+		offset_out = (sector_out & SECTOR_MASK) << SECTOR_SHIFT;
+
+		if (null_cache_active(nullb) && !is_fua)
+			null_make_cache_space(nullb, PAGE_SIZE);
+
+		t_page_in = null_lookup_page(nullb, sector_in, false,
+					     !null_cache_active(nullb));
+		if (!t_page_in)
+			goto err;
+		t_page_out = null_insert_page(nullb, sector_out,
+					      !null_cache_active(nullb) ||
+					      is_fua);
+		if (!t_page_out)
+			goto err;
+
+		in = kmap_local_page(t_page_in->page);
+		out = kmap_local_page(t_page_out->page);
+
+		memcpy(out + offset_out, in + offset_in, temp);
+		kunmap_local(out);
+		kunmap_local(in);
+		__set_bit(sector_out & SECTOR_MASK, t_page_out->bitmap);
+
+		if (is_fua)
+			null_free_sector(nullb, sector_out, true);
+
+		rem -= temp;
+		sector_in += temp >> SECTOR_SHIFT;
+		sector_out += temp >> SECTOR_SHIFT;
+	}
+
+	ret = 0;
+err:
+	spin_unlock_irq(&nullb->lock);
+	return ret;
+}
+
 static int null_handle_rq(struct nullb_cmd *cmd)
 {
 	struct request *rq = cmd->rq;
@@ -1280,13 +1349,16 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 	sector_t sector = blk_rq_pos(rq);
 	struct req_iterator iter;
 	struct bio_vec bvec;
+	bool fua = rq->cmd_flags & REQ_FUA;
+
+	if (op_is_copy(req_op(rq)))
+		return nullb_setup_copy_write(nullb, rq, fua);
 
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
-				    op_is_write(req_op(rq)), sector,
-				    rq->cmd_flags & REQ_FUA);
+				    op_is_write(req_op(rq)), sector, fua);
 		if (err) {
 			spin_unlock_irq(&nullb->lock);
 			return err;
@@ -2042,6 +2114,9 @@ static int null_validate_conf(struct nullb_device *dev)
 		return -EINVAL;
 	}
 
+	if (dev->queue_mode == NULL_Q_BIO)
+		dev->copy_max_bytes = 0;
+
 	return 0;
 }
 
@@ -2161,6 +2236,10 @@ static int null_add_dev(struct nullb_device *dev)
 		dev->max_sectors = queue_max_hw_sectors(nullb->q);
 	dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+	blk_queue_max_copy_sectors_hw(nullb->q,
+				      dev->copy_max_bytes >> SECTOR_SHIFT);
+	if (dev->copy_max_bytes)
+		blk_queue_flag_set(QUEUE_FLAG_COPY, nullb->disk->queue);
 
 	if (dev->virt_boundary)
 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 929f659dd255..e82e53a2e2df 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -107,6 +107,7 @@ struct nullb_device {
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
 	unsigned int max_sectors; /* Max sectors per command */
+	unsigned long copy_max_bytes; /* Max copy offload length in bytes */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */
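The core of nullb_setup_copy_write() above is a blocksize-at-a-time copy loop. Stripped of locking, page lookup, and cache handling, its arithmetic can be modelled in userspace over a flat byte array standing in for the nullb page store (names and the flat-buffer layout are illustrative only):

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors, as in the block layer */

/* Copy `rem` bytes from sector_in to sector_out over a flat byte array,
 * chunked by `blocksize` the way nullb_setup_copy_write() walks the range.
 * Assumes rem and blocksize are multiples of the sector size and that the
 * source and destination ranges do not overlap. */
static void copy_range(uint8_t *disk, uint64_t sector_in, uint64_t sector_out,
		       size_t rem, size_t blocksize)
{
	while (rem > 0) {
		size_t chunk = rem < blocksize ? rem : blocksize;

		memcpy(disk + (sector_out << SECTOR_SHIFT),
		       disk + (sector_in << SECTOR_SHIFT), chunk);
		rem -= chunk;
		sector_in += chunk >> SECTOR_SHIFT;
		sector_out += chunk >> SECTOR_SHIFT;
	}
}
```

Advancing both sector cursors by `chunk >> SECTOR_SHIFT` each iteration is what lets the driver bound each step by the device blocksize while still honouring an arbitrary total length.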