From patchwork Tue Dec 15 06:03:00 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de,
 damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 1/6] block: export bio_add_hw_pages()
Date: Mon, 14 Dec 2020 22:03:00 -0800
Message-Id: <20201215060305.28141-2-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

To implement the NVMe Zone Append command on the NVMeOF target side for
generic zoned block devices exposed through the NVMe Zoned Namespaces
interface, we need to build bios that respect the hardware limits, i.e.
use bio_add_hw_page() with queue_max_zone_append_sectors() instead of
bio_add_page(). Without this API being exported, the NVMeOF target
would have to go through bio_add_hw_page()'s caller
bio_iov_iter_get_pages(), which results in inefficient extra work.

Export the API so that the NVMeOF ZBD over ZNS backend can use it to
build Zone Append bios.

Signed-off-by: Chaitanya Kulkarni
---
 block/bio.c            | 1 +
 block/blk.h            | 4 ----
 include/linux/blkdev.h | 4 ++++
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1..eafd97c6c7fd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -826,6 +826,7 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 	bio->bi_iter.bi_size += len;
 	return len;
 }
+EXPORT_SYMBOL(bio_add_hw_page);
 
 /**
  * bio_add_pc_page - attempt to add page to passthrough bio
diff --git a/block/blk.h b/block/blk.h
index e05507a8d1e3..1fdb8d5d8590 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -428,8 +428,4 @@ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
 #endif
 }
 
-int bio_add_hw_page(struct request_queue *q, struct bio *bio,
-		struct page *page, unsigned int len, unsigned int offset,
-		unsigned int max_sectors, bool *same_page);
-
 #endif /* BLK_INTERNAL_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2e..2bdaa7cacfa3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -2023,4 +2023,8 @@ int fsync_bdev(struct block_device *bdev);
 struct super_block *freeze_bdev(struct block_device *bdev);
 int thaw_bdev(struct block_device *bdev, struct super_block *sb);
 
+int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+		struct page *page, unsigned int len, unsigned int offset,
+		unsigned int max_sectors, bool *same_page);
+
 #endif /* _LINUX_BLKDEV_H */
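[Editor's note: a minimal usage sketch, not part of this patch, showing
the pattern the export enables: adding a page to a zone-append bio while
honoring the queue's zone-append limit. The wrapper name and variables
are hypothetical; only bio_add_hw_page() and
queue_max_zone_append_sectors() are the kernel APIs discussed above.]

	/* Illustrative only: add one page within the zone-append limit. */
	static int build_append_page(struct request_queue *q, struct bio *bio,
				     struct page *page, unsigned int len,
				     unsigned int offset)
	{
		unsigned int max_sects = queue_max_zone_append_sectors(q);
		bool same_page = false;

		if (bio_add_hw_page(q, bio, page, len, offset,
				    max_sects, &same_page) != len)
			return -EINVAL;	/* would exceed the hardware limit */
		if (same_page)
			put_page(page);	/* drop the duplicate page reference */
		return 0;
	}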
From patchwork Tue Dec 15 06:03:01 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 2/6] nvmet: add lba to sect conversion helpers
Date: Mon, 14 Dec 2020 22:03:01 -0800
Message-Id: <20201215060305.28141-3-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

This preparation patch adds helpers to convert LBAs to sectors and
sectors to LBAs. This is needed to eliminate code duplication in the
upcoming ZBD backend. Use these helpers in the block device backend.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c |  8 +++-----
 drivers/nvme/target/nvmet.h       | 10 ++++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 125dde3f410e..23095bdfce06 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -256,8 +256,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	if (is_pci_p2pdma_page(sg_page(req->sg)))
 		op |= REQ_NOMERGE;
 
-	sector = le64_to_cpu(req->cmd->rw.slba);
-	sector <<= (req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
 	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
 		bio = &req->b.inline_bio;
@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
 	int ret;
 
 	ret = __blkdev_issue_discard(ns->bdev,
-			le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
+			nvmet_lba_to_sect(ns, range->slba),
 			le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
 			GFP_KERNEL, 0, bio);
 	if (ret && ret != -EOPNOTSUPP) {
@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
-	sector = le64_to_cpu(write_zeroes->slba) <<
-			(req->ns->blksize_shift - 9);
+	sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
 	nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
 			(req->ns->blksize_shift - 9));
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 592763732065..8776dd1a0490 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -603,4 +603,14 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
 	return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
 }
 
+static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
+{
+	return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
+{
+	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
+}
+
 #endif /* _NVMET_H */
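[Editor's note: a quick worked example, not part of the patch. For a
namespace with 4K logical blocks, blksize_shift = 12, so both helpers
shift by blksize_shift - SECTOR_SHIFT = 12 - 9 = 3, i.e. one LBA spans
eight 512-byte sectors:]

	/* Hypothetical values for illustration (blksize_shift == 12). */
	sector_t sect = nvmet_lba_to_sect(ns, cpu_to_le64(16));	/* 16 << 3 == 128 */
	__le64 lba = nvmet_sect_to_lba(ns, 128);		/* 128 >> 3 == 16 */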
From patchwork Tue Dec 15 06:03:02 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 3/6] nvmet: add NVM command set identifier support
Date: Mon, 14 Dec 2020 22:03:02 -0800
Message-Id: <20201215060305.28141-4-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

NVMe TP 4056 allows a controller to support different command sets.
The NVMeoF target currently only supports namespaces that contain
traditional logical blocks that may be randomly read and written. In
some applications there is value in exposing namespaces that contain
logical blocks with special access rules (e.g. sequential-write-required
namespaces such as Zoned Namespaces (ZNS)).

In order to support the Zoned Block Device (ZBD) backend, the
controller needs to support the ZNS Command Set Identifier (CSI). This
preparation patch adjusts the code so that it can support different
command sets. We update the namespace data structure to store the CSI
value, which defaults to NVME_CSI_NVM, representing the traditional
logical block namespace type.

The CSI support is required to implement the ZBD backend over the NVMe
ZNS interface, since the ZNS commands belong to a different command set
than the default one.
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/admin-cmd.c | 33 ++++++++++++++++++++-------------
 drivers/nvme/target/core.c      | 13 ++++++++++++-
 drivers/nvme/target/nvmet.h     |  1 +
 3 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 74620240ac47..f4c0f3aca485 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -176,19 +176,26 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
 	if (!log)
 		goto out;
 
-	log->acs[nvme_admin_get_log_page]	= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_identify]		= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_abort_cmd]		= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_set_features]	= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_get_features]	= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_async_event]	= cpu_to_le32(1 << 0);
-	log->acs[nvme_admin_keep_alive]		= cpu_to_le32(1 << 0);
-
-	log->iocs[nvme_cmd_read]		= cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_write]		= cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_flush]		= cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_dsm]			= cpu_to_le32(1 << 0);
-	log->iocs[nvme_cmd_write_zeroes]	= cpu_to_le32(1 << 0);
+	switch (req->cmd->get_log_page.csi) {
+	case NVME_CSI_NVM:
+		log->acs[nvme_admin_get_log_page]	= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_identify]		= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_abort_cmd]		= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_set_features]	= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_get_features]	= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_async_event]	= cpu_to_le32(1 << 0);
+		log->acs[nvme_admin_keep_alive]		= cpu_to_le32(1 << 0);
+
+		log->iocs[nvme_cmd_read]		= cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_write]		= cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_flush]		= cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_dsm]			= cpu_to_le32(1 << 0);
+		log->iocs[nvme_cmd_write_zeroes]	= cpu_to_le32(1 << 0);
+		break;
+	default:
+		status = NVME_SC_INVALID_LOG_PAGE;
+		break;
+	}
 
 	status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 8ce4d59cc9e7..672e4009f8d6 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -681,6 +681,7 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid)
 
 	uuid_gen(&ns->uuid);
 	ns->buffered_io = false;
+	ns->csi = NVME_CSI_NVM;
 
 	return ns;
 }
@@ -1103,6 +1104,16 @@ static inline u8 nvmet_cc_iocqes(u32 cc)
 	return (cc >> NVME_CC_IOCQES_SHIFT) & 0xf;
 }
 
+static inline bool nvmet_cc_css_check(u8 cc_css)
+{
+	switch (cc_css <<= NVME_CC_CSS_SHIFT) {
+	case NVME_CC_CSS_NVM:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
 {
 	lockdep_assert_held(&ctrl->lock);
@@ -1111,7 +1122,7 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
 	    nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES ||
 	    nvmet_cc_mps(ctrl->cc) != 0 ||
 	    nvmet_cc_ams(ctrl->cc) != 0 ||
-	    nvmet_cc_css(ctrl->cc) != 0) {
+	    !nvmet_cc_css_check(nvmet_cc_css(ctrl->cc))) {
 		ctrl->csts = NVME_CSTS_CFS;
 		return;
 	}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 8776dd1a0490..476b3cd91c65 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -81,6 +81,7 @@ struct nvmet_ns {
 	struct pci_dev		*p2p_dev;
 	int			pi_type;
 	int			metadata_size;
+	u8			csi;
 };
 
 static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
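[Editor's note: an illustrative sketch, not part of the patch, of how
the controller-enable path consumes the new check. nvmet_cc_css() in
core.c extracts the 3-bit CC.CSS field (bits 6:4 of CC), and
nvmet_cc_css_check() shifts it back so it can be compared against the
NVME_CC_CSS_* constants. After this patch only CSS value 000b (NVM
command set) passes; patch 4 in this series additionally accepts
NVME_CC_CSS_CSI:]

	/* Illustration only; mirrors the nvmet_start_ctrl() logic above. */
	u8 css = nvmet_cc_css(ctrl->cc);  /* (cc >> NVME_CC_CSS_SHIFT) & 0x7 */

	if (!nvmet_cc_css_check(css)) {
		ctrl->csts = NVME_CSTS_CFS;  /* unsupported command set: fatal */
		return;
	}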
From patchwork Tue Dec 15 06:03:03 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 4/6] nvmet: add ZBD over ZNS backend support
Date: Mon, 14 Dec 2020 22:03:03 -0800
Message-Id: <20201215060305.28141-5-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

NVMe TP 4053 (Zoned Namespaces, ZNS) allows host software to communicate
with a non-volatile memory subsystem using zones for NVMe protocol based
controllers. NVMeOF already supports ZNS NVMe protocol compliant devices
on the target in passthru mode. Generic zoned block devices such as
Shingled Magnetic Recording (SMR) HDDs, however, are not based on the
NVMe protocol.

This patch adds a ZNS backend to support such ZBDs for the NVMeOF
target. The support includes implementing the new command set
NVME_CSI_ZNS and adding handlers for the ZNS command set: NVMe Identify
Controller, NVMe Identify Namespace, NVMe Zone Append, NVMe Zone
Management Send and NVMe Zone Management Receive. With the new command
set identifier we also update the target command effects log to reflect
the ZNS compliant commands.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/Makefile      |   1 +
 drivers/nvme/target/admin-cmd.c   |  26 +++
 drivers/nvme/target/core.c        |   1 +
 drivers/nvme/target/io-cmd-bdev.c |  33 ++-
 drivers/nvme/target/nvmet.h       |  38 ++++
 drivers/nvme/target/zns.c         | 342 ++++++++++++++++++++++++++++++
 6 files changed, 433 insertions(+), 8 deletions(-)
 create mode 100644 drivers/nvme/target/zns.c

diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index ebf91fc4c72e..9837e580fa7e 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_NVME_TARGET_TCP)		+= nvmet-tcp.o
 nvmet-y		+= core.o configfs.o admin-cmd.o fabrics-cmd.o \
 			discovery.o io-cmd-file.o io-cmd-bdev.o
 nvmet-$(CONFIG_NVME_TARGET_PASSTHRU)	+= passthru.o
+nvmet-$(CONFIG_BLK_DEV_ZONED)		+= zns.o
 nvme-loop-y	+= loop.o
 nvmet-rdma-y	+= rdma.o
 nvmet-fc-y	+= fc.o
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index f4c0f3aca485..6f5279b15aa6 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -192,6 +192,15 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
 		log->iocs[nvme_cmd_dsm]			= cpu_to_le32(1 << 0);
 		log->iocs[nvme_cmd_write_zeroes]	= cpu_to_le32(1 << 0);
 		break;
+	case NVME_CSI_ZNS:
+		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+			u32 *iocs = log->iocs;
+
+			iocs[nvme_cmd_zone_append]	= cpu_to_le32(1 << 0);
+			iocs[nvme_cmd_zone_mgmt_send]	= cpu_to_le32(1 << 0);
+			iocs[nvme_cmd_zone_mgmt_recv]	= cpu_to_le32(1 << 0);
+		}
+		break;
 	default:
 		status = NVME_SC_INVALID_LOG_PAGE;
 		break;
@@ -614,6 +623,7 @@ static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
 
 static void nvmet_execute_identify_desclist(struct nvmet_req *req)
 {
+	u16 nvme_cis_zns = NVME_CSI_ZNS;
 	u16 status = 0;
 	off_t off = 0;
 
@@ -638,6 +648,14 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
 		if (status)
 			goto out;
 	}
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+		if (req->ns->csi == NVME_CSI_ZNS)
+			status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI,
+							  NVME_NIDT_CSI_LEN,
+							  &nvme_cis_zns, &off);
+		if (status)
+			goto out;
+	}
 
 	if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
 			off) != NVME_IDENTIFY_DATA_SIZE - off)
@@ -655,8 +673,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
 	switch (req->cmd->identify.cns) {
 	case NVME_ID_CNS_NS:
 		return nvmet_execute_identify_ns(req);
+	case NVME_ID_CNS_CS_NS:
+		if (req->cmd->identify.csi == NVME_CSI_ZNS)
+			return nvmet_execute_identify_cns_cs_ns(req);
+		break;
 	case NVME_ID_CNS_CTRL:
 		return nvmet_execute_identify_ctrl(req);
+	case NVME_ID_CNS_CS_CTRL:
+		if (req->cmd->identify.csi == NVME_CSI_ZNS)
+			return nvmet_execute_identify_cns_cs_ctrl(req);
+		break;
 	case NVME_ID_CNS_NS_ACTIVE_LIST:
 		return nvmet_execute_identify_nslist(req);
 	case NVME_ID_CNS_NS_DESC_LIST:
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 672e4009f8d6..17a99c7134dc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1107,6 +1107,7 @@ static inline u8 nvmet_cc_iocqes(u32 cc)
 static inline bool nvmet_cc_css_check(u8 cc_css)
 {
 	switch (cc_css <<= NVME_CC_CSS_SHIFT) {
+	case NVME_CC_CSS_CSI:
 	case NVME_CC_CSS_NVM:
 		return true;
 	default:
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 23095bdfce06..6178ef643962 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -63,6 +63,14 @@ static void nvmet_bdev_ns_enable_integrity(struct nvmet_ns *ns)
 	}
 }
 
+void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
+{
+	if (ns->bdev) {
+		blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
+		ns->bdev = NULL;
+	}
+}
+
 int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 {
 	int ret;
@@ -86,15 +94,15 @@ int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
 	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY_T10))
 		nvmet_bdev_ns_enable_integrity(ns);
 
-	return 0;
-}
-
-void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
-{
-	if (ns->bdev) {
-		blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
-		ns->bdev = NULL;
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && bdev_is_zoned(ns->bdev)) {
+		if (!nvmet_bdev_zns_enable(ns)) {
+			nvmet_bdev_ns_disable(ns);
+			return -EINVAL;
+		}
+		ns->csi = NVME_CSI_ZNS;
 	}
+
+	return 0;
 }
 
 void nvmet_bdev_ns_revalidate(struct nvmet_ns *ns)
@@ -448,6 +456,15 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
 	case nvme_cmd_write_zeroes:
 		req->execute = nvmet_bdev_execute_write_zeroes;
 		return 0;
+	case nvme_cmd_zone_append:
+		req->execute = nvmet_bdev_execute_zone_append;
+		return 0;
+	case nvme_cmd_zone_mgmt_recv:
+		req->execute = nvmet_bdev_execute_zone_mgmt_recv;
+		return 0;
+	case nvme_cmd_zone_mgmt_send:
+		req->execute = nvmet_bdev_execute_zone_mgmt_send;
+		return 0;
 	default:
 		pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
 		       req->sq->qid);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 476b3cd91c65..7361665585a2 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -252,6 +252,10 @@ struct nvmet_subsys {
 	unsigned int		admin_timeout;
 	unsigned int		io_timeout;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
+#ifdef CONFIG_BLK_DEV_ZONED
+	u8			zasl;
+#endif /* CONFIG_BLK_DEV_ZONED */
 };
 
 static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -614,4 +618,38 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
 	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
 }
 
+#ifdef CONFIG_BLK_DEV_ZONED
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns);
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req);
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req);
+#else /* CONFIG_BLK_DEV_ZONED */
+static inline bool
+nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+	return false;
+}
+static inline void
+nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+}
+#endif /* CONFIG_BLK_DEV_ZONED */
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
new file mode 100644
index 000000000000..3981baa647b2
--- /dev/null
+++ b/drivers/nvme/target/zns.c
@@ -0,0 +1,342 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe ZNS-ZBD command implementation.
+ * Copyright (c) 2020-2021 HGST, a Western Digital Company.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/nvme.h>
+#include <linux/blkdev.h>
+#include "nvmet.h"

[Editor's note: the two angle-bracket include targets above were eaten by
the archive's HTML stripping; <linux/nvme.h> and <linux/blkdev.h> are the
headers this file plausibly needs and are a reconstruction, not verbatim.]

+
+/*
+ * We set the Memory Page Size Minimum (MPSMIN) for the target controller
+ * to 0, to which nvme_enable_ctrl() adds 12, resulting in a page_shift
+ * value of 12 (2^12 = 4K). Use a shift of 12 when calculating the ZASL.
+ */
+#define NVMET_MPSMIN_SHIFT	12
+
+static u16 nvmet_bdev_zns_checks(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+
+	if (!bdev_is_zoned(req->ns->bdev)) {
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+
+	if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT) {
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	if (req->cmd->zmr.zrasf != NVME_ZRASF_ZONE_REPORT_ALL) {
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	if (req->cmd->zmr.pr != NVME_REPORT_ZONE_PARTIAL)
+		status = NVME_SC_INVALID_FIELD;
+
+out:
+	return status;
+}
+
+/*
+ * ZNS related command implementation and helpers.
+ */
+
+static inline u8 nvmet_zasl(unsigned int zone_append_sects)
+{
+	/*
+	 * Zone Append Size Limit is the value expressed in units of the
+	 * minimum memory page size (i.e. 12) and is reported as a power of two.
+	 */
+	return ilog2((zone_append_sects << 9) >> NVMET_MPSMIN_SHIFT);
+}
+
+static inline bool nvmet_zns_update_zasl(struct nvmet_ns *ns)
+{
+	struct request_queue *q = ns->bdev->bd_disk->queue;
+	u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
+
+	if (ns->subsys->zasl)
+		return ns->subsys->zasl < zasl ? false : true;
+
+	ns->subsys->zasl = zasl;
+	return true;
+}
+
+static int nvmet_bdev_validate_zns_zones_cb(struct blk_zone *z,
+					    unsigned int idx, void *data)
+{
+	if (z->type == BLK_ZONE_TYPE_CONVENTIONAL)
+		return -EOPNOTSUPP;
+	return 0;
+}
+
+static bool nvmet_bdev_has_conv_zones(struct block_device *bdev)
+{
+	int ret;
+
+	if (bdev->bd_disk->queue->conv_zones_bitmap)
+		return true;
+
+	ret = blkdev_report_zones(bdev, 0, blkdev_nr_zones(bdev->bd_disk),
+				  nvmet_bdev_validate_zns_zones_cb, NULL);
+
+	return ret < 0 ? true : false;
+}
+
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+	if (nvmet_bdev_has_conv_zones(ns->bdev))
+		return false;
+
+	/*
+	 * For ZBC and ZAC devices, writes into sequential zones must be
+	 * aligned to the device physical block size. So use this value as
+	 * the logical block size to avoid errors.
+	 */
+	ns->blksize_shift = blksize_bits(bdev_physical_block_size(ns->bdev));
+
+	if (!nvmet_zns_update_zasl(ns))
+		return false;
+
+	return !(get_capacity(ns->bdev->bd_disk) &
+			(bdev_zone_sectors(ns->bdev) - 1));
+}
+
+/*
+ * ZNS related Admin and I/O command handlers.
+ */
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+	u8 zasl = req->sq->ctrl->subsys->zasl;
+	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+	struct nvme_id_ctrl_zns *id;
+	u16 status;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	if (ctrl->ops->get_mdts)
+		id->zasl = min_t(u8, ctrl->ops->get_mdts(ctrl), zasl);
+	else
+		id->zasl = zasl;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+
+	kfree(id);
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+	struct nvme_id_ns_zns *id_zns;
+	u16 status = NVME_SC_SUCCESS;
+	u64 zsze;
+
+	if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+		req->error_loc = offsetof(struct nvme_identify, nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+
+	id_zns = kzalloc(sizeof(*id_zns), GFP_KERNEL);
+	if (!id_zns) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	req->ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
+	if (!req->ns) {
+		status = NVME_SC_INTERNAL;
+		goto done;
+	}
+
+	if (!bdev_is_zoned(req->ns->bdev)) {
+		req->error_loc = offsetof(struct nvme_identify, nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto done;
+	}
+
+	nvmet_ns_revalidate(req->ns);
+	zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >>
+					req->ns->blksize_shift;
+	id_zns->lbafe[0].zsze = cpu_to_le64(zsze);
+	id_zns->mor = cpu_to_le32(bdev_max_open_zones(req->ns->bdev));
+	id_zns->mar = cpu_to_le32(bdev_max_active_zones(req->ns->bdev));
+
+done:
+	status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
+	kfree(id_zns);
+out:
+	nvmet_req_complete(req, status);
+}
+
+struct nvmet_report_zone_data {
+	struct nvmet_ns *ns;
+	struct nvme_zone_report *rz;
+};
+
+static int nvmet_bdev_report_zone_cb(struct blk_zone *z, unsigned int idx,
+				     void *data)
+{
+	struct nvmet_report_zone_data *report_zone_data = data;
+	struct nvme_zone_descriptor *entries = report_zone_data->rz->entries;
+	struct nvmet_ns *ns = report_zone_data->ns;
+
+	entries[idx].zcap = nvmet_sect_to_lba(ns, z->capacity);
+	entries[idx].zslba = nvmet_sect_to_lba(ns, z->start);
+	entries[idx].wp = nvmet_sect_to_lba(ns, z->wp);
+	entries[idx].za = z->reset ? 1 << 2 : 0;
+	entries[idx].zt = z->type;
+	entries[idx].zs = z->cond << 4;
+
+	return 0;
+}
+
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zmr.slba);
+	u32 bufsize = (le32_to_cpu(req->cmd->zmr.numd) + 1) << 2;
+	struct nvmet_report_zone_data data = { .ns = req->ns };
+	unsigned int nr_zones;
+	int reported_zones;
+	u16 status;
+
+	nr_zones = (bufsize - sizeof(struct nvme_zone_report)) /
+			sizeof(struct nvme_zone_descriptor);
+
+	status = nvmet_bdev_zns_checks(req);
+	if (status)
+		goto out;
+
+	data.rz = __vmalloc(bufsize, GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO);
+	if (!data.rz) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	reported_zones = blkdev_report_zones(req->ns->bdev, sect, nr_zones,
+					     nvmet_bdev_report_zone_cb,
+					     &data);
+	if (reported_zones < 0) {
+		status = NVME_SC_INTERNAL;
+		goto out_free_report_zones;
+	}
+
+	data.rz->nr_zones = cpu_to_le64(reported_zones);
+
+	status = nvmet_copy_to_sgl(req, 0, data.rz, bufsize);
+
+out_free_report_zones:
+	kvfree(data.rz);
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zms.slba);
+	sector_t nr_sect = bdev_zone_sectors(req->ns->bdev);
+	u16 status = NVME_SC_SUCCESS;
+	enum req_opf op;
+	int ret;
+
+	if (req->cmd->zms.select_all)
+		nr_sect = get_capacity(req->ns->bdev->bd_disk);
+
+	switch (req->cmd->zms.zsa) {
+	case NVME_ZONE_OPEN:
+		op = REQ_OP_ZONE_OPEN;
+		break;
+	case NVME_ZONE_CLOSE:
+		op = REQ_OP_ZONE_CLOSE;
+		break;
+	case NVME_ZONE_FINISH:
+		op = REQ_OP_ZONE_FINISH;
+		break;
+	case NVME_ZONE_RESET:
+		op = REQ_OP_ZONE_RESET;
+		break;
+	default:
+		status = NVME_SC_INVALID_FIELD;
+		goto out;
+	}
+
+	ret = blkdev_zone_mgmt(req->ns->bdev, op, sect, nr_sect, GFP_KERNEL);
+	if (ret)
+		status = NVME_SC_INTERNAL;
+out:
+	nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
+	struct request_queue *q = req->ns->bdev->bd_disk->queue;
+	unsigned int max_sects = queue_max_zone_append_sectors(q);
+	u16 status = NVME_SC_SUCCESS;
+	unsigned int total_len = 0;
+	struct scatterlist *sg;
+	int ret = 0, sg_cnt;
+	struct bio *bio;
+
+	if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
+		return;
+
+	if (!req->sg_cnt) {
+		nvmet_req_complete(req, 0);
+		return;
+	}
+
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->b.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
+	}
+
+	bio_set_dev(bio, req->ns->bdev);
+	bio->bi_iter.bi_sector = sect;
+	bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
+	if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
+		bio->bi_opf |= REQ_FUA;
+
+	for_each_sg(req->sg, sg, req->sg_cnt, sg_cnt) {
+		struct page *p = sg_page(sg);
+		unsigned int l = sg->length;
+		unsigned int o = sg->offset;
+		bool same_page = false;
+
+		ret = bio_add_hw_page(q, bio, p, l, o, max_sects, &same_page);
+		if (ret != sg->length) {
+			status = NVME_SC_INTERNAL;
+			goto out_bio_put;
+		}
+		if (same_page)
+			put_page(p);
+
+		total_len += sg->length;
+	}
+
+	if (total_len != nvmet_rw_data_len(req)) {
+		status = NVME_SC_INTERNAL | NVME_SC_DNR;
+		goto out_bio_put;
+	}
+
+	ret = submit_bio_wait(bio);
+	req->cqe->result.u64 = nvmet_sect_to_lba(req->ns,
+						 bio->bi_iter.bi_sector);
+
+out_bio_put:
+	if (bio != &req->b.inline_bio)
+		bio_put(bio);
+	nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL : status);
+}
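[Editor's note: a worked example of the ZASL math above, not part of the
patch, using assumed numbers. Suppose queue_max_zone_append_sectors()
returns 1024 (512 KiB in 512-byte sectors). Then nvmet_zasl() computes
ilog2((1024 << 9) >> 12) = ilog2(128) = 7, and the host decodes ZASL = 7
as 2^7 units of the 4K minimum page size, i.e. a 512 KiB Zone Append
limit:]

	/* Assumed value: queue_max_zone_append_sectors(q) == 1024. */
	u8 zasl = nvmet_zasl(1024);	/* (1024 << 9) >> 12 == 128; ilog2(128) == 7 */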
From patchwork Tue Dec 15 06:03:04 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 5/6] nvmet: add bio get helper for different backends
Date: Mon, 14 Dec 2020 22:03:04 -0800
Message-Id: <20201215060305.28141-6-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

With the addition of the ZNS backend we now have three different
backends using the inline bio optimization. That leads to duplicate
code for allocating or initializing the bio in all three backends:
generic bdev, passthru, and generic ZNS.

Add a helper function to reduce the duplication. The helper accepts a
bi_end_io callback, which gets set for the non-inline bio_alloc() case.
This handles the special case in the passthru backend, where the
non-inline bio_alloc() path sets bio->bi_end_io = bio_put; passing the
callback as a parameter avoids an extra branch in the passthru fast
path. For the rest of the backends we set the same bi_end_io callback
for the inline and non-inline cases, i.e. nvmet_bio_done() for generic
bdev and NULL for generic ZNS.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c |  7 +------
 drivers/nvme/target/nvmet.h       | 16 ++++++++++++++++
 drivers/nvme/target/passthru.c    |  8 +-------
 drivers/nvme/target/zns.c         |  8 +-------
 4 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 6178ef643962..72746e29cb0d 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -266,12 +266,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 
 	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->b.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-	}
+	bio = nvmet_req_bio_get(req, NULL);
 	bio_set_dev(bio, req->ns->bdev);
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_private = req;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 7361665585a2..3fc84f79cce1 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -652,4 +652,20 @@ nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 }
 #endif /* CONFIG_BLK_DEV_ZONED */
 
+static inline struct bio *nvmet_req_bio_get(struct nvmet_req *req,
+					    bio_end_io_t *bi_end_io)
+{
+	struct bio *bio;
+
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->b.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+		return bio;
+	}
+
+	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
+	bio->bi_end_io = bi_end_io;
+	return bio;
+}
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index b9776fc8f08f..54f765b566ee 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -194,13 +194,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->p.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL,
-				min(req->sg_cnt, BIO_MAX_PAGES));
-		bio->bi_end_io = bio_put;
-	}
+	bio = nvmet_req_bio_get(req, bio_put);
 	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 3981baa647b2..8bafab98d076 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -296,13 +296,7 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 		return;
 	}
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
-		bio = &req->b.inline_bio;
-		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
-	} else {
-		bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
-	}
-
+	bio = nvmet_req_bio_get(req, NULL);
 	bio_set_dev(bio, req->ns->bdev);
 	bio->bi_iter.bi_sector = sect;
 	bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
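[Editor's note: illustration only. With this helper, each backend picks
one call per allocation site; the callback argument matters only when
the non-inline path needs its own completion, as in passthru:]

	bio = nvmet_req_bio_get(req, NULL);	/* bdev/ZNS style */
	bio = nvmet_req_bio_get(req, bio_put);	/* passthru style: non-inline
						 * bios release themselves on
						 * completion */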
From patchwork Tue Dec 15 06:03:05 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me, hch@lst.de, damien.lemoal@wdc.com, Chaitanya Kulkarni
Subject: [PATCH V7 6/6] nvmet: add bio put helper for different backends
Date: Mon, 14 Dec 2020 22:03:05 -0800
Message-Id: <20201215060305.28141-7-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>
References: <20201215060305.28141-1-chaitanya.kulkarni@wdc.com>

With the addition of the ZNS backend we now have three different
backends using the inline bio optimization. That leads to duplicate
code for freeing the bio in all three backends: generic bdev, passthru
and generic ZNS. Add a helper function to avoid the duplicate code and
update the respective backends.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/io-cmd-bdev.c | 3 +--
 drivers/nvme/target/nvmet.h       | 6 ++++++
 drivers/nvme/target/passthru.c    | 3 +--
 drivers/nvme/target/zns.c         | 3 +--
 4 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 72746e29cb0d..6ffd84a620e7 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -172,8 +172,7 @@ static void nvmet_bio_done(struct bio *bio)
 	struct nvmet_req *req = bio->bi_private;
 
 	nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status));
-	if (bio != &req->b.inline_bio)
-		bio_put(bio);
+	nvmet_req_bio_put(req, bio);
 }
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 3fc84f79cce1..e770086b5890 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -668,4 +668,10 @@ static inline struct bio *nvmet_req_bio_get(struct nvmet_req *req,
 	return bio;
 }
 
+static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio)
+{
+	if (bio != &req->b.inline_bio)
+		bio_put(bio);
+}
+
 #endif /* _NVMET_H */
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 54f765b566ee..a4a73d64c603 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -200,8 +200,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			if (bio != &req->p.inline_bio)
-				bio_put(bio);
+			nvmet_req_bio_put(req, bio);
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 8bafab98d076..d6a8310cf672 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -330,7 +330,6 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 			bio->bi_iter.bi_sector);
 
 out_bio_put:
-	if (bio != &req->b.inline_bio)
-		bio_put(bio);
+	nvmet_req_bio_put(req, bio);
 	nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL : status);
 }
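[Editor's note: a closing sketch, not part of the series, showing how
the get/put helpers from patches 5 and 6 pair up in a backend's I/O
path; setup and error handling are elided:]

	struct bio *bio = nvmet_req_bio_get(req, NULL);	/* inline if small */
	/* ... set bdev/sector/opf, add pages, submit ... */
	nvmet_req_bio_put(req, bio);	/* frees only non-inline bios */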