From patchwork Fri Apr 17 12:15:26 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11494987
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn,
    Christoph Hellwig
org" , Daniel Wagner , Johannes Thumshirn , Christoph Hellwig Subject: [PATCH v7 01/11] scsi: free sgtables in case command setup fails Date: Fri, 17 Apr 2020 21:15:26 +0900 Message-Id: <20200417121536.5393-2-johannes.thumshirn@wdc.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com> References: <20200417121536.5393-1-johannes.thumshirn@wdc.com> MIME-Version: 1.0 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org In case scsi_setup_fs_cmnd() fails we're not freeing the sgtables allocated by scsi_init_io(), thus we leak the allocated memory. So free the sgtables allocated by scsi_init_io() in case scsi_setup_fs_cmnd() fails. Technically scsi_setup_scsi_cmnd() does not suffer from this problem, as it can only fail if scsi_init_io() fails, so it does not have sgtables allocated. But to maintain symmetry and as a measure of defensive programming, free the sgtables on scsi_setup_scsi_cmnd() failure as well. scsi_mq_free_sgtables() has safeguards against double-freeing of memory so this is safe to do. While we're at it, rename scsi_mq_free_sgtables() to scsi_free_sgtables(). Signed-off-by: Johannes Thumshirn Reviewed-by: Christoph Hellwig Reviewed-by: Daniel Wagner --- drivers/scsi/scsi_lib.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 47835c4b4ee0..ad97369ffabd 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -548,7 +548,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd) } } -static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd) +static void scsi_free_sgtables(struct scsi_cmnd *cmd) { if (cmd->sdb.table.nents) sg_free_table_chained(&cmd->sdb.table, @@ -560,7 +560,7 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd) static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd) { - scsi_mq_free_sgtables(cmd); + scsi_free_sgtables(cmd); scsi_uninit_cmd(cmd); } @@ -1059,7 +1059,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd) return BLK_STS_OK; out_free_sgtables: - scsi_mq_free_sgtables(cmd); + scsi_free_sgtables(cmd); return ret; } EXPORT_SYMBOL(scsi_init_io); @@ -1190,6 +1190,7 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev, struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); + blk_status_t ret; if (!blk_rq_bytes(req)) cmd->sc_data_direction = DMA_NONE; @@ -1199,9 +1200,14 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev, cmd->sc_data_direction = DMA_FROM_DEVICE; if (blk_rq_is_scsi(req)) - return scsi_setup_scsi_cmnd(sdev, req); + ret = scsi_setup_scsi_cmnd(sdev, req); else - return scsi_setup_fs_cmnd(sdev, req); + ret = scsi_setup_fs_cmnd(sdev, req); + + if (ret != BLK_STS_OK) + scsi_free_sgtables(cmd); + + return ret; } static blk_status_t From patchwork Fri Apr 17 12:15:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Thumshirn X-Patchwork-Id: 11494983 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B29DD81 for ; Fri, 17 Apr 2020 12:15:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 99F4D221E9 for ; Fri, 17 Apr 2020 12:15:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit 
From patchwork Fri Apr 17 12:15:27 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11494983
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn,
    Christoph Hellwig
Subject: [PATCH v7 02/11] block: provide fallbacks for blk_queue_zone_is_seq
 and blk_queue_zone_no
Date: Fri, 17 Apr 2020 21:15:27 +0900
Message-Id: <20200417121536.5393-3-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

blk_queue_zone_is_seq() and blk_queue_zone_no() have not been called with
CONFIG_BLK_DEV_ZONED disabled until now. The introduction of
REQ_OP_ZONE_APPEND will change this, so we need to provide noop fallbacks
for the !CONFIG_BLK_DEV_ZONED case.
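A hedged sketch of why the stubs matter (hypothetical helper name, not a hunk
from this series): once the !CONFIG_BLK_DEV_ZONED fallbacks exist, common
block-layer code such as the zone-append checks added later in this series can
call the helpers unconditionally; on non-zoned builds blk_queue_zone_is_seq()
is simply always false and the compiler can discard the branch.

#include <linux/blkdev.h>

static bool bio_targets_one_seq_zone(struct request_queue *q, struct bio *bio)
{
	sector_t pos = bio->bi_iter.bi_sector;

	if (!blk_queue_is_zoned(q))
		return false;

	/* Builds with or without CONFIG_BLK_DEV_ZONED thanks to the stubs. */
	return blk_queue_zone_is_seq(q, pos) &&
	       blk_queue_zone_no(q, pos) ==
	       blk_queue_zone_no(q, pos + bio_sectors(bio) - 1);
}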
Signed-off-by: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
---
 include/linux/blkdev.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 32868fbedc9e..e47888a7d80b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -729,6 +729,16 @@ static inline unsigned int blk_queue_nr_zones(struct request_queue *q)
 {
 	return 0;
 }
+static inline bool blk_queue_zone_is_seq(struct request_queue *q,
+					 sector_t sector)
+{
+	return false;
+}
+static inline unsigned int blk_queue_zone_no(struct request_queue *q,
+					     sector_t sector)
+{
+	return 0;
+}
 #endif /* CONFIG_BLK_DEV_ZONED */
 
 static inline bool rq_is_sync(struct request *rq)

From patchwork Fri Apr 17 12:15:28 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11494995
From:
Johannes Thumshirn To: Jens Axboe Cc: Christoph Hellwig , linux-block , Damien Le Moal , Keith Busch , "linux-scsi @ vger . kernel . org" , "Martin K . Petersen" , "linux-fsdevel @ vger . kernel . org" , Daniel Wagner , Christoph Hellwig , Johannes Thumshirn Subject: [PATCH v7 03/11] block: rename __bio_add_pc_page to bio_add_hw_page Date: Fri, 17 Apr 2020 21:15:28 +0900 Message-Id: <20200417121536.5393-4-johannes.thumshirn@wdc.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com> References: <20200417121536.5393-1-johannes.thumshirn@wdc.com> MIME-Version: 1.0 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Christoph Hellwig Rename __bio_add_pc_page() to bio_add_hw_page() and explicitly pass in a max_sectors argument. This max_sectors argument can be used to specify constraints from the hardware. Signed-off-by: Christoph Hellwig [ jth: rebased and made public for blk-map.c ] Signed-off-by: Johannes Thumshirn Reviewed-by: Daniel Wagner --- block/bio.c | 60 +++++++++++++++++++++++++++---------------------- block/blk-map.c | 5 +++-- block/blk.h | 4 ++-- 3 files changed, 38 insertions(+), 31 deletions(-) diff --git a/block/bio.c b/block/bio.c index 21cbaa6a1c20..0f0e337e46b4 100644 --- a/block/bio.c +++ b/block/bio.c @@ -748,9 +748,14 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, return true; } -static bool bio_try_merge_pc_page(struct request_queue *q, struct bio *bio, - struct page *page, unsigned len, unsigned offset, - bool *same_page) +/* + * Try to merge a page into a segment, while obeying the hardware segment + * size limit. This is not for normal read/write bios, but for passthrough + * or Zone Append operations that we can't split. + */ +static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio, + struct page *page, unsigned len, + unsigned offset, bool *same_page) { struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1]; unsigned long mask = queue_segment_boundary(q); @@ -764,39 +769,24 @@ static bool bio_try_merge_pc_page(struct request_queue *q, struct bio *bio, return __bio_try_merge_page(bio, page, len, offset, same_page); } -/** - * __bio_add_pc_page - attempt to add page to passthrough bio - * @q: the target queue - * @bio: destination bio - * @page: page to add - * @len: vec entry length - * @offset: vec entry offset - * @same_page: return if the merge happen inside the same page - * - * Attempt to add a page to the bio_vec maplist. This can fail for a - * number of reasons, such as the bio being full or target block device - * limitations. The target block device must allow bio's up to PAGE_SIZE, - * so it is always possible to add a single page to an empty bio. - * - * This should only be used by passthrough bios. +/* + * Add a page to a bio while respecting the hardware max_sectors, max_segment + * and gap limitations. 
*/ -int __bio_add_pc_page(struct request_queue *q, struct bio *bio, +int bio_add_hw_page(struct request_queue *q, struct bio *bio, struct page *page, unsigned int len, unsigned int offset, - bool *same_page) + unsigned int max_sectors, bool *same_page) { struct bio_vec *bvec; - /* - * cloned bio must not modify vec list - */ - if (unlikely(bio_flagged(bio, BIO_CLONED))) + if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED))) return 0; - if (((bio->bi_iter.bi_size + len) >> 9) > queue_max_hw_sectors(q)) + if (((bio->bi_iter.bi_size + len) >> 9) > max_sectors) return 0; if (bio->bi_vcnt > 0) { - if (bio_try_merge_pc_page(q, bio, page, len, offset, same_page)) + if (bio_try_merge_hw_seg(q, bio, page, len, offset, same_page)) return len; /* @@ -823,11 +813,27 @@ int __bio_add_pc_page(struct request_queue *q, struct bio *bio, return len; } +/** + * bio_add_pc_page - attempt to add page to passthrough bio + * @q: the target queue + * @bio: destination bio + * @page: page to add + * @len: vec entry length + * @offset: vec entry offset + * + * Attempt to add a page to the bio_vec maplist. This can fail for a + * number of reasons, such as the bio being full or target block device + * limitations. The target block device must allow bio's up to PAGE_SIZE, + * so it is always possible to add a single page to an empty bio. + * + * This should only be used by passthrough bios. + */ int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page *page, unsigned int len, unsigned int offset) { bool same_page = false; - return __bio_add_pc_page(q, bio, page, len, offset, &same_page); + return bio_add_hw_page(q, bio, page, len, offset, + queue_max_hw_sectors(q), &same_page); } EXPORT_SYMBOL(bio_add_pc_page); diff --git a/block/blk-map.c b/block/blk-map.c index b72c361911a4..f36ff496a761 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -257,6 +257,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q, static struct bio *bio_map_user_iov(struct request_queue *q, struct iov_iter *iter, gfp_t gfp_mask) { + unsigned int max_sectors = queue_max_hw_sectors(q); int j; struct bio *bio; int ret; @@ -294,8 +295,8 @@ static struct bio *bio_map_user_iov(struct request_queue *q, if (n > bytes) n = bytes; - if (!__bio_add_pc_page(q, bio, page, n, offs, - &same_page)) { + if (!bio_add_hw_page(q, bio, page, n, offs, + max_sectors, &same_page)) { if (same_page) put_page(page); break; diff --git a/block/blk.h b/block/blk.h index 0a94ec68af32..ba31511c5243 100644 --- a/block/blk.h +++ b/block/blk.h @@ -484,8 +484,8 @@ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size) struct request_queue *__blk_alloc_queue(int node_id); -int __bio_add_pc_page(struct request_queue *q, struct bio *bio, +int bio_add_hw_page(struct request_queue *q, struct bio *bio, struct page *page, unsigned int len, unsigned int offset, - bool *same_page); + unsigned int max_sectors, bool *same_page); #endif /* BLK_INTERNAL_H */ From patchwork Fri Apr 17 12:15:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Thumshirn X-Patchwork-Id: 11494993 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4563581 for ; Fri, 17 Apr 2020 12:15:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2277B21D94 for ; Fri, 17 Apr 2020 12:15:57 +0000 (UTC) 
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn,
    Christoph Hellwig
Subject: [PATCH v7 04/11] block: Introduce REQ_OP_ZONE_APPEND
Date: Fri, 17 Apr 2020 21:15:29 +0900
Message-Id: <20200417121536.5393-5-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

From: Keith Busch

Define REQ_OP_ZONE_APPEND to append-write sectors to a zone of a zoned
block device. This is a no-merge write operation.

A zone append write BIO must:
* Target a zoned block device
* Have a sector position indicating the start sector of the target zone
* Target a sequential write zone
* Not cross a zone boundary
* Not be split, so that a single range of LBAs is written with a single
  command

Implement these checks in generic_make_request_checks() using the helper
function blk_check_zone_append().
To avoid write append BIO splitting, introduce the new max_zone_append_sectors queue limit attribute and ensure that a BIO size is always lower than this limit. Export this new limit through sysfs and check these limits in bio_full(). Also when a LLDD can't dispatch a request to a specific zone, it will return BLK_STS_ZONE_RESOURCE indicating this request needs to be delayed, e.g. because the zone it will be dispatched to is still write-locked. If this happens set the request aside in a local list to continue trying dispatching requests such as READ requests or a WRITE/ZONE_APPEND requests targetting other zones. This way we can still keep a high queue depth without starving other requests even if one request can't be served due to zone write-locking. Finally, make sure that the bio sector position indicates the actual write position as indicated by the device on completion. Signed-off-by: Keith Busch [ jth: added zone-append specific add_page and merge_page helpers ] Signed-off-by: Johannes Thumshirn Reviewed-by: Christoph Hellwig --- block/bio.c | 64 ++++++++++++++++++++++++++++++++++++--- block/blk-core.c | 52 +++++++++++++++++++++++++++++++ block/blk-mq.c | 27 +++++++++++++++++ block/blk-settings.c | 23 ++++++++++++++ block/blk-sysfs.c | 13 ++++++++ drivers/scsi/scsi_lib.c | 1 + include/linux/blk_types.h | 14 +++++++++ include/linux/blkdev.h | 11 +++++++ 8 files changed, 200 insertions(+), 5 deletions(-) diff --git a/block/bio.c b/block/bio.c index 0f0e337e46b4..97baadc6d964 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1006,7 +1006,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) put_page(page); } else { if (WARN_ON_ONCE(bio_full(bio, len))) - return -EINVAL; + return -EINVAL; __bio_add_page(bio, page, len, offset); } offset = 0; @@ -1016,6 +1016,50 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) return 0; } +static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter) +{ + unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt; + unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt; + struct request_queue *q = bio->bi_disk->queue; + unsigned int max_append_sectors = queue_max_zone_append_sectors(q); + struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt; + struct page **pages = (struct page **)bv; + ssize_t size, left; + unsigned len, i; + size_t offset; + + if (WARN_ON_ONCE(!max_append_sectors)) + return 0; + + /* + * Move page array up in the allocated memory for the bio vecs as far as + * possible so that we can start filling biovecs from the beginning + * without overwriting the temporary page array. + */ + BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2); + pages += entries_left * (PAGE_PTRS_PER_BVEC - 1); + + size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); + if (unlikely(size <= 0)) + return size ? 
size : -EFAULT; + + for (left = size, i = 0; left > 0; left -= len, i++) { + struct page *page = pages[i]; + bool same_page = false; + + len = min_t(size_t, PAGE_SIZE - offset, left); + if (bio_add_hw_page(q, bio, page, len, offset, + max_append_sectors, &same_page) != len) + return -EINVAL; + if (same_page) + put_page(page); + offset = 0; + } + + iov_iter_advance(iter, size); + return 0; +} + /** * bio_iov_iter_get_pages - add user or kernel pages to a bio * @bio: bio to add pages to @@ -1045,10 +1089,16 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) return -EINVAL; do { - if (is_bvec) - ret = __bio_iov_bvec_add_pages(bio, iter); - else - ret = __bio_iov_iter_get_pages(bio, iter); + if (bio_op(bio) == REQ_OP_ZONE_APPEND) { + if (WARN_ON_ONCE(is_bvec)) + return -EINVAL; + ret = __bio_iov_append_get_pages(bio, iter); + } else { + if (is_bvec) + ret = __bio_iov_bvec_add_pages(bio, iter); + else + ret = __bio_iov_iter_get_pages(bio, iter); + } } while (!ret && iov_iter_count(iter) && !bio_full(bio, 0)); if (is_bvec) @@ -1451,6 +1501,10 @@ struct bio *bio_split(struct bio *bio, int sectors, BUG_ON(sectors <= 0); BUG_ON(sectors >= bio_sectors(bio)); + /* Zone append commands cannot be split */ + if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND)) + return NULL; + split = bio_clone_fast(bio, gfp, bs); if (!split) return NULL; diff --git a/block/blk-core.c b/block/blk-core.c index 7e4a1da0715e..34fe47a728c3 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -135,6 +135,7 @@ static const char *const blk_op_name[] = { REQ_OP_NAME(ZONE_OPEN), REQ_OP_NAME(ZONE_CLOSE), REQ_OP_NAME(ZONE_FINISH), + REQ_OP_NAME(ZONE_APPEND), REQ_OP_NAME(WRITE_SAME), REQ_OP_NAME(WRITE_ZEROES), REQ_OP_NAME(SCSI_IN), @@ -240,6 +241,17 @@ static void req_bio_endio(struct request *rq, struct bio *bio, bio_advance(bio, nbytes); + if (req_op(rq) == REQ_OP_ZONE_APPEND && error == BLK_STS_OK) { + /* + * Partial zone append completions cannot be supported as the + * BIO fragments may end up not being written sequentially. + */ + if (bio->bi_iter.bi_size) + bio->bi_status = BLK_STS_IOERR; + else + bio->bi_iter.bi_sector = rq->__sector; + } + /* don't actually finish bio if it's part of flush sequence */ if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ)) bio_endio(bio); @@ -871,6 +883,41 @@ static inline int blk_partition_remap(struct bio *bio) return ret; } +/* + * Check write append to a zoned block device. + */ +static inline blk_status_t blk_check_zone_append(struct request_queue *q, + struct bio *bio) +{ + sector_t pos = bio->bi_iter.bi_sector; + int nr_sectors = bio_sectors(bio); + + /* Only applicable to zoned block devices */ + if (!blk_queue_is_zoned(q)) + return BLK_STS_NOTSUPP; + + /* The bio sector must point to the start of a sequential zone */ + if (pos & (blk_queue_zone_sectors(q) - 1) || + !blk_queue_zone_is_seq(q, pos)) + return BLK_STS_IOERR; + + /* + * Not allowed to cross zone boundaries. Otherwise, the BIO will be + * split and could result in non-contiguous sectors being written in + * different zones. 
+ */ + if (blk_queue_zone_no(q, pos) != blk_queue_zone_no(q, pos + nr_sectors)) + return BLK_STS_IOERR; + + /* Make sure the BIO is small enough and will not get split */ + if (nr_sectors > q->limits.max_zone_append_sectors) + return BLK_STS_IOERR; + + bio->bi_opf |= REQ_NOMERGE; + + return BLK_STS_OK; +} + static noinline_for_stack bool generic_make_request_checks(struct bio *bio) { @@ -943,6 +990,11 @@ generic_make_request_checks(struct bio *bio) if (!q->limits.max_write_same_sectors) goto not_supported; break; + case REQ_OP_ZONE_APPEND: + status = blk_check_zone_append(q, bio); + if (status != BLK_STS_OK) + goto end_io; + break; case REQ_OP_ZONE_RESET: case REQ_OP_ZONE_OPEN: case REQ_OP_ZONE_CLOSE: diff --git a/block/blk-mq.c b/block/blk-mq.c index 8e56884fd2e9..50e216a218ee 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1195,6 +1195,19 @@ static void blk_mq_handle_dev_resource(struct request *rq, __blk_mq_requeue_request(rq); } +static void blk_mq_handle_zone_resource(struct request *rq, + struct list_head *zone_list) +{ + /* + * If we end up here it is because we cannot dispatch a request to a + * specific zone due to LLD level zone-write locking or other zone + * related resource not being available. In this case, set the request + * aside in zone_list for retrying it later. + */ + list_add(&rq->queuelist, zone_list); + __blk_mq_requeue_request(rq); +} + /* * Returns true if we did some work AND can potentially do more. */ @@ -1206,6 +1219,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, bool no_tag = false; int errors, queued; blk_status_t ret = BLK_STS_OK; + LIST_HEAD(zone_list); if (list_empty(list)) return false; @@ -1264,6 +1278,16 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) { blk_mq_handle_dev_resource(rq, list); break; + } else if (ret == BLK_STS_ZONE_RESOURCE) { + /* + * Move the request to zone_list and keep going through + * the dispatch list to find more requests the drive can + * accept. 
+ */ + blk_mq_handle_zone_resource(rq, &zone_list); + if (list_empty(list)) + break; + continue; } if (unlikely(ret != BLK_STS_OK)) { @@ -1275,6 +1299,9 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, queued++; } while (!list_empty(list)); + if (!list_empty(&zone_list)) + list_splice_tail_init(&zone_list, list); + hctx->dispatched[queued_to_index(queued)]++; /* diff --git a/block/blk-settings.c b/block/blk-settings.c index 14397b4c4b53..58d5b49fb131 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -48,6 +48,7 @@ void blk_set_default_limits(struct queue_limits *lim) lim->chunk_sectors = 0; lim->max_write_same_sectors = 0; lim->max_write_zeroes_sectors = 0; + lim->max_zone_append_sectors = 0; lim->max_discard_sectors = 0; lim->max_hw_discard_sectors = 0; lim->discard_granularity = 0; @@ -83,6 +84,7 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->max_dev_sectors = UINT_MAX; lim->max_write_same_sectors = UINT_MAX; lim->max_write_zeroes_sectors = UINT_MAX; + lim->max_zone_append_sectors = UINT_MAX; } EXPORT_SYMBOL(blk_set_stacking_limits); @@ -221,6 +223,25 @@ void blk_queue_max_write_zeroes_sectors(struct request_queue *q, } EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors); +/** + * blk_queue_max_zone_append_sectors - set max sectors for a single zone append + * @q: the request queue for the device + * @max_zone_append_sectors: maximum number of sectors to write per command + **/ +void blk_queue_max_zone_append_sectors(struct request_queue *q, + unsigned int max_zone_append_sectors) +{ + unsigned int max_sectors; + + max_sectors = min(q->limits.max_hw_sectors, max_zone_append_sectors); + if (max_sectors) + max_sectors = min_not_zero(q->limits.chunk_sectors, + max_sectors); + + q->limits.max_zone_append_sectors = max_sectors; +} +EXPORT_SYMBOL_GPL(blk_queue_max_zone_append_sectors); + /** * blk_queue_max_segments - set max hw segments for a request for this queue * @q: the request queue for the device @@ -470,6 +491,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, b->max_write_same_sectors); t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors, b->max_write_zeroes_sectors); + t->max_zone_append_sectors = min(t->max_zone_append_sectors, + b->max_zone_append_sectors); t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn); t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask, diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index fca9b158f4a0..02643e149d5e 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -218,6 +218,13 @@ static ssize_t queue_write_zeroes_max_show(struct request_queue *q, char *page) (unsigned long long)q->limits.max_write_zeroes_sectors << 9); } +static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page) +{ + unsigned long long max_sectors = q->limits.max_zone_append_sectors; + + return sprintf(page, "%llu\n", max_sectors << SECTOR_SHIFT); +} + static ssize_t queue_max_sectors_store(struct request_queue *q, const char *page, size_t count) { @@ -639,6 +646,11 @@ static struct queue_sysfs_entry queue_write_zeroes_max_entry = { .show = queue_write_zeroes_max_show, }; +static struct queue_sysfs_entry queue_zone_append_max_entry = { + .attr = {.name = "zone_append_max_bytes", .mode = 0444 }, + .show = queue_zone_append_max_show, +}; + static struct queue_sysfs_entry queue_nonrot_entry = { .attr = {.name = "rotational", .mode = 0644 }, .show = queue_show_nonrot, @@ -749,6 +761,7 @@ static struct attribute *queue_attrs[] = { 
&queue_discard_zeroes_data_entry.attr, &queue_write_same_max_entry.attr, &queue_write_zeroes_max_entry.attr, + &queue_zone_append_max_entry.attr, &queue_nonrot_entry.attr, &queue_zoned_entry.attr, &queue_nr_zones_entry.attr, diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index ad97369ffabd..b9e8f55cf8c4 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1690,6 +1690,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, case BLK_STS_OK: break; case BLK_STS_RESOURCE: + case BLK_STS_ZONE_RESOURCE: if (atomic_read(&sdev->device_busy) || scsi_device_blocked(sdev)) ret = BLK_STS_DEV_RESOURCE; diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 70254ae11769..824ec2d89954 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -63,6 +63,18 @@ typedef u8 __bitwise blk_status_t; */ #define BLK_STS_DEV_RESOURCE ((__force blk_status_t)13) +/* + * BLK_STS_ZONE_RESOURCE is returned from the driver to the block layer if zone + * related resources are unavailable, but the driver can guarantee the queue + * will be rerun in the future once the resources become available again. + * + * This is different from BLK_STS_DEV_RESOURCE in that it explicitly references + * a zone specific resource and IO to a different zone on the same device could + * still be served. Examples of that are zones that are write-locked, but a read + * to the same zone could be served. + */ +#define BLK_STS_ZONE_RESOURCE ((__force blk_status_t)14) + /** * blk_path_error - returns true if error may be path related * @error: status the request was completed with @@ -296,6 +308,8 @@ enum req_opf { REQ_OP_ZONE_CLOSE = 11, /* Transition a zone to full */ REQ_OP_ZONE_FINISH = 12, + /* write data at the current zone write pointer */ + REQ_OP_ZONE_APPEND = 13, /* SCSI passthrough using struct scsi_request */ REQ_OP_SCSI_IN = 32, diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index e47888a7d80b..774947365341 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -336,6 +336,7 @@ struct queue_limits { unsigned int max_hw_discard_sectors; unsigned int max_write_same_sectors; unsigned int max_write_zeroes_sectors; + unsigned int max_zone_append_sectors; unsigned int discard_granularity; unsigned int discard_alignment; @@ -757,6 +758,9 @@ static inline bool rq_mergeable(struct request *rq) if (req_op(rq) == REQ_OP_WRITE_ZEROES) return false; + if (req_op(rq) == REQ_OP_ZONE_APPEND) + return false; + if (rq->cmd_flags & REQ_NOMERGE_FLAGS) return false; if (rq->rq_flags & RQF_NOMERGE_FLAGS) @@ -1091,6 +1095,8 @@ extern void blk_queue_max_write_same_sectors(struct request_queue *q, extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q, unsigned int max_write_same_sectors); extern void blk_queue_logical_block_size(struct request_queue *, unsigned int); +extern void blk_queue_max_zone_append_sectors(struct request_queue *q, + unsigned int max_zone_append_sectors); extern void blk_queue_physical_block_size(struct request_queue *, unsigned int); extern void blk_queue_alignment_offset(struct request_queue *q, unsigned int alignment); @@ -1303,6 +1309,11 @@ static inline unsigned int queue_max_segment_size(const struct request_queue *q) return q->limits.max_segment_size; } +static inline unsigned int queue_max_zone_append_sectors(const struct request_queue *q) +{ + return q->limits.max_zone_append_sectors; +} + static inline unsigned queue_logical_block_size(const struct request_queue *q) { int retval = 512; From patchwork Fri 
Apr 17 12:15:30 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495003
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn,
    Christoph Hellwig
org" , Daniel Wagner , Johannes Thumshirn , Christoph Hellwig Subject: [PATCH v7 05/11] block: introduce blk_req_zone_write_trylock Date: Fri, 17 Apr 2020 21:15:30 +0900 Message-Id: <20200417121536.5393-6-johannes.thumshirn@wdc.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com> References: <20200417121536.5393-1-johannes.thumshirn@wdc.com> MIME-Version: 1.0 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Introduce blk_req_zone_write_trylock(), which either grabs the write-lock for a sequential zone or returns false, if the zone is already locked. Signed-off-by: Johannes Thumshirn Reviewed-by: Christoph Hellwig --- block/blk-zoned.c | 14 ++++++++++++++ include/linux/blkdev.h | 1 + 2 files changed, 15 insertions(+) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index f87956e0dcaf..c822cfa7a102 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -82,6 +82,20 @@ bool blk_req_needs_zone_write_lock(struct request *rq) } EXPORT_SYMBOL_GPL(blk_req_needs_zone_write_lock); +bool blk_req_zone_write_trylock(struct request *rq) +{ + unsigned int zno = blk_rq_zone_no(rq); + + if (test_and_set_bit(zno, rq->q->seq_zones_wlock)) + return false; + + WARN_ON_ONCE(rq->rq_flags & RQF_ZONE_WRITE_LOCKED); + rq->rq_flags |= RQF_ZONE_WRITE_LOCKED; + + return true; +} +EXPORT_SYMBOL_GPL(blk_req_zone_write_trylock); + void __blk_req_zone_write_lock(struct request *rq) { if (WARN_ON_ONCE(test_and_set_bit(blk_rq_zone_no(rq), diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 774947365341..0797d1e81802 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1740,6 +1740,7 @@ extern int bdev_write_page(struct block_device *, sector_t, struct page *, #ifdef CONFIG_BLK_DEV_ZONED bool blk_req_needs_zone_write_lock(struct request *rq); +bool blk_req_zone_write_trylock(struct request *rq); void __blk_req_zone_write_lock(struct request *rq); void __blk_req_zone_write_unlock(struct request *rq); From patchwork Fri Apr 17 12:15:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Thumshirn X-Patchwork-Id: 11495009 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 22C6B174A for ; Fri, 17 Apr 2020 12:16:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0965320644 for ; Fri, 17 Apr 2020 12:16:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com header.i=@wdc.com header.b="FcY4zKbs" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728472AbgDQMQB (ORCPT ); Fri, 17 Apr 2020 08:16:01 -0400 Received: from esa2.hgst.iphmx.com ([68.232.143.124]:50643 "EHLO esa2.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727845AbgDQMP5 (ORCPT ); Fri, 17 Apr 2020 08:15:57 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1587125759; x=1618661759; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wi1qv7z0Mg6tQsAx0kXU9eGu+szpv/+bHam7HZYDstw=; b=FcY4zKbsuFmVRfZUfDLIPCPFVesIfbC0ihMrF8djVE2E3U+aWEzH1ptG ySNISdDnba1/dlj60K2iGvRbH4KGufDTc1riIEN3cKBH9nT5693jeEa0E 
From patchwork Fri Apr 17 12:15:31 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495009
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Damien Le Moal,
    Johannes Thumshirn
Subject: [PATCH v7 06/11] block: Modify revalidate zones
Date: Fri, 17 Apr 2020 21:15:31 +0900
Message-Id: <20200417121536.5393-7-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

From: Damien Le Moal

Modify the interface of blk_revalidate_disk_zones() to add an optional
driver callback function that a driver can use to extend processing done
during zone revalidation. The callback, if defined, is executed with the
device request queue frozen, after all zones have been inspected.

Signed-off-by: Damien Le Moal
Signed-off-by: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
---
 block/blk-zoned.c              | 9 ++++++++-
 drivers/block/null_blk_zoned.c | 2 +-
 include/linux/blkdev.h         | 3 ++-
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index c822cfa7a102..23831fa8701d 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -471,14 +471,19 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
 /**
  * blk_revalidate_disk_zones - (re)allocate and initialize zone bitmaps
  * @disk:	Target disk
+ * @update_driver_data:	Callback to update driver data on the frozen disk
  *
  * Helper function for low-level device drivers to (re) allocate and initialize
  * a disk request queue zone bitmaps. This functions should normally be called
  * within the disk ->revalidate method for blk-mq based drivers. For BIO based
  * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
  * is correct.
+ * If the @update_driver_data callback function is not NULL, the callback is
+ * executed with the device request queue frozen after all zones have been
+ * checked.
*/ -int blk_revalidate_disk_zones(struct gendisk *disk) +int blk_revalidate_disk_zones(struct gendisk *disk, + void (*update_driver_data)(struct gendisk *disk)) { struct request_queue *q = disk->queue; struct blk_revalidate_zone_args args = { @@ -512,6 +517,8 @@ int blk_revalidate_disk_zones(struct gendisk *disk) q->nr_zones = args.nr_zones; swap(q->seq_zones_wlock, args.seq_zones_wlock); swap(q->conv_zones_bitmap, args.conv_zones_bitmap); + if (update_driver_data) + update_driver_data(disk); ret = 0; } else { pr_warn("%s: failed to revalidate zones\n", disk->disk_name); diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c index 9e4bcdad1a80..46641df2e58e 100644 --- a/drivers/block/null_blk_zoned.c +++ b/drivers/block/null_blk_zoned.c @@ -73,7 +73,7 @@ int null_register_zoned_dev(struct nullb *nullb) struct request_queue *q = nullb->q; if (queue_is_mq(q)) - return blk_revalidate_disk_zones(nullb->disk); + return blk_revalidate_disk_zones(nullb->disk, NULL); blk_queue_chunk_sectors(q, nullb->dev->zone_size_sects); q->nr_zones = blkdev_nr_zones(nullb->disk); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 0797d1e81802..132de24180ad 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -362,7 +362,8 @@ unsigned int blkdev_nr_zones(struct gendisk *disk); extern int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, sector_t sectors, sector_t nr_sectors, gfp_t gfp_mask); -extern int blk_revalidate_disk_zones(struct gendisk *disk); +int blk_revalidate_disk_zones(struct gendisk *disk, + void (*update_driver_data)(struct gendisk *disk)); extern int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long arg); From patchwork Fri Apr 17 12:15:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Thumshirn X-Patchwork-Id: 11495017 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F409214B4 for ; Fri, 17 Apr 2020 12:16:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D88CD22202 for ; Fri, 17 Apr 2020 12:16:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com header.i=@wdc.com header.b="k1rJxI3X" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728549AbgDQMQE (ORCPT ); Fri, 17 Apr 2020 08:16:04 -0400 Received: from esa2.hgst.iphmx.com ([68.232.143.124]:50643 "EHLO esa2.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728307AbgDQMP7 (ORCPT ); Fri, 17 Apr 2020 08:15:59 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1587125761; x=1618661761; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=af+TpmrNaP6opwMEPh6jKrw6Bxfh1IsevFX/PW8xfPo=; b=k1rJxI3XnEdr/6iX0iiw7HW1lT1xop1JToB4jfJPrZKsZL6084iOGPQp bX6UlUu5mlCTrcptejcVkdonNd1PM/IjFB2S6NLH7NNaDG3sN9Zf9FsI0 4cTvdS8X+9tWv4++3xSAucxQgszpQfraJJGdCWhc2aPOOSlpQwz3lN/HS T1ygK3jOEhD2hlhrgkO86ataVT7Z67aKRxQMbGsTIj9C7/YA/QuETLiTs Bj6RZSDh8wjNguBeRYiZmdRO2j7pNXsko32MIrcPk4xZF4EFu3HZj5zpL wJdJjdN+Qu/+IprIBqdquBI21CHmof7DH6fixITRVzTVTBWziUdpHp9OK w==; IronPort-SDR: 2NxYaZNOOMxf+C1aWYvlE7cMmPwDxW9D91TsOYnQ3lbppxnxbZCTn3aeHrd9z9vQYZ9zZpRWLB 
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn,
    Christoph Hellwig
Subject: [PATCH v7 07/11] scsi: sd_zbc: factor out sanity checks for zoned commands
Date: Fri, 17 Apr 2020 21:15:32 +0900
Message-Id: <20200417121536.5393-8-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

Factor sanity checks for zoned commands from sd_zbc_setup_zone_mgmt_cmnd().
This will help with the introduction of an emulated ZONE_APPEND command.

Signed-off-by: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
---
 drivers/scsi/sd_zbc.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index f45c22b09726..ee156fbf3780 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -209,6 +209,26 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector,
 	return ret;
 }
 
+static blk_status_t sd_zbc_cmnd_checks(struct scsi_cmnd *cmd)
+{
+	struct request *rq = cmd->request;
+	struct scsi_disk *sdkp = scsi_disk(rq->rq_disk);
+	sector_t sector = blk_rq_pos(rq);
+
+	if (!sd_is_zoned(sdkp))
+		/* Not a zoned device */
+		return BLK_STS_IOERR;
+
+	if (sdkp->device->changed)
+		return BLK_STS_IOERR;
+
+	if (sector & (sd_zbc_zone_sectors(sdkp) - 1))
+		/* Unaligned request */
+		return BLK_STS_IOERR;
+
+	return BLK_STS_OK;
+}
+
 /**
  * sd_zbc_setup_zone_mgmt_cmnd - Prepare a zone ZBC_OUT command. The operations
  * can be RESET WRITE POINTER, OPEN, CLOSE or FINISH.
@@ -223,20 +243,14 @@ blk_status_t sd_zbc_setup_zone_mgmt_cmnd(struct scsi_cmnd *cmd,
 					unsigned char op, bool all)
 {
 	struct request *rq = cmd->request;
-	struct scsi_disk *sdkp = scsi_disk(rq->rq_disk);
 	sector_t sector = blk_rq_pos(rq);
+	struct scsi_disk *sdkp = scsi_disk(rq->rq_disk);
 	sector_t block = sectors_to_logical(sdkp->device, sector);
+	blk_status_t ret;
 
-	if (!sd_is_zoned(sdkp))
-		/* Not a zoned device */
-		return BLK_STS_IOERR;
-
-	if (sdkp->device->changed)
-		return BLK_STS_IOERR;
-
-	if (sector & (sd_zbc_zone_sectors(sdkp) - 1))
-		/* Unaligned request */
-		return BLK_STS_IOERR;
+	ret = sd_zbc_cmnd_checks(cmd);
+	if (ret != BLK_STS_OK)
+		return ret;
 
 	cmd->cmd_len = 16;
 	memset(cmd->cmnd, 0, cmd->cmd_len);
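To illustrate why the checks are factored out, here is a hedged sketch of the
shape the ZONE_APPEND emulation in the next patch can take. This is
illustrative only and the helper name below is hypothetical; the actual code
is in patch 08.

/* Hypothetical shape of an emulated zone-append setup path. */
static blk_status_t example_prepare_zone_append(struct scsi_cmnd *cmd)
{
	blk_status_t ret;

	/* Reuse the common checks: zoned device, not changed, zone aligned. */
	ret = sd_zbc_cmnd_checks(cmd);
	if (ret != BLK_STS_OK)
		return ret;

	/* ... look up the zone write pointer and rewrite the start LBA ... */
	return BLK_STS_OK;
}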
From patchwork Fri Apr 17 12:15:33 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495027
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn
Subject: [PATCH v7 08/11] scsi: sd_zbc: emulate ZONE_APPEND commands
Date: Fri, 17 Apr 2020 21:15:33 +0900
Message-Id: <20200417121536.5393-9-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>
References: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

Emulate ZONE_APPEND for SCSI disks using a regular WRITE(16) command
with a start LBA set to the target zone write pointer position.

In order to always know the write pointer position of a sequential
write zone, the write pointer of all zones is tracked using an array of
32-bit zone write pointer offsets attached to the scsi disk structure.
Each entry of the array indicates the zone's write pointer position
relative to the zone start sector.

The write pointer offsets are kept in sync with the device as follows:
1) the write pointer offset of a zone is reset to 0 when a
   REQ_OP_ZONE_RESET command completes.
2) the write pointer offset of a zone is set to the zone size when a
   REQ_OP_ZONE_FINISH command completes.
3) the write pointer offset of a zone is incremented by the number of
   512B sectors written when a write, write same or zone append command
   completes.
4) the write pointer offset of all zones is reset to 0 when a
   REQ_OP_ZONE_RESET_ALL command completes.
(A standalone sketch of this bookkeeping follows the patch below.)

Since the block layer does not write-lock zones for zone append
commands, the target zone of a zone append command is locked when
sd_zbc_prepare_zone_append() is called from sd_setup_read_write_cmnd(),
which ensures sequential ordering of the regular write commands used
for the emulation. If the zone write lock cannot be obtained (e.g. a
zone append is in-flight or a regular write has already locked the
zone), dispatching of the zone append command is delayed by returning
BLK_STS_ZONE_RESOURCE.

To avoid the need for write-locking all zones for REQ_OP_ZONE_RESET_ALL
requests, a spinlock protects accesses and modifications of the zone
write pointer offsets. This spinlock is initialized from sd_probe()
using the new function sd_zbc_init_disk().

Co-developed-by: Damien Le Moal
Signed-off-by: Johannes Thumshirn
Reviewed-by: Christoph Hellwig
---
 drivers/scsi/sd.c     |  24 ++-
 drivers/scsi/sd.h     |  43 ++++-
 drivers/scsi/sd_zbc.c | 361 ++++++++++++++++++++++++++++++++++++++----
 3 files changed, 390 insertions(+), 38 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a793cb08d025..66ff5f04c0ce 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1206,6 +1206,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 		}
 	}
 
+	if (req_op(rq) == REQ_OP_ZONE_APPEND) {
+		ret = sd_zbc_prepare_zone_append(cmd, &lba, nr_blocks);
+		if (ret)
+			return ret;
+	}
+
 	fua = rq->cmd_flags & REQ_FUA ?
0x8 : 0; dix = scsi_prot_sg_count(cmd); dif = scsi_host_dif_capable(cmd->device->host, sdkp->protection_type); @@ -1287,6 +1293,7 @@ static blk_status_t sd_init_command(struct scsi_cmnd *cmd) return sd_setup_flush_cmnd(cmd); case REQ_OP_READ: case REQ_OP_WRITE: + case REQ_OP_ZONE_APPEND: return sd_setup_read_write_cmnd(cmd); case REQ_OP_ZONE_RESET: return sd_zbc_setup_zone_mgmt_cmnd(cmd, ZO_RESET_WRITE_POINTER, @@ -2055,7 +2062,7 @@ static int sd_done(struct scsi_cmnd *SCpnt) out: if (sd_is_zoned(sdkp)) - sd_zbc_complete(SCpnt, good_bytes, &sshdr); + good_bytes = sd_zbc_complete(SCpnt, good_bytes, &sshdr); SCSI_LOG_HLCOMPLETE(1, scmd_printk(KERN_INFO, SCpnt, "sd_done: completed %d of %d bytes\n", @@ -3372,6 +3379,10 @@ static int sd_probe(struct device *dev) sdkp->first_scan = 1; sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS; + error = sd_zbc_init_disk(sdkp); + if (error) + goto out_free_index; + sd_revalidate_disk(gd); gd->flags = GENHD_FL_EXT_DEVT; @@ -3409,6 +3420,7 @@ static int sd_probe(struct device *dev) out_put: put_disk(gd); out_free: + sd_zbc_release_disk(sdkp); kfree(sdkp); out: scsi_autopm_put_device(sdp); @@ -3485,6 +3497,8 @@ static void scsi_disk_release(struct device *dev) put_disk(disk); put_device(&sdkp->device->sdev_gendev); + sd_zbc_release_disk(sdkp); + kfree(sdkp); } @@ -3665,19 +3679,19 @@ static int __init init_sd(void) if (!sd_page_pool) { printk(KERN_ERR "sd: can't init discard page pool\n"); err = -ENOMEM; - goto err_out_ppool; + goto err_out_cdb_pool; } err = scsi_register_driver(&sd_template.gendrv); if (err) - goto err_out_driver; + goto err_out_ppool; return 0; -err_out_driver: +err_out_ppool: mempool_destroy(sd_page_pool); -err_out_ppool: +err_out_cdb_pool: mempool_destroy(sd_cdb_pool); err_out_cache: diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h index 50fff0bf8c8e..6009311105ef 100644 --- a/drivers/scsi/sd.h +++ b/drivers/scsi/sd.h @@ -79,6 +79,12 @@ struct scsi_disk { u32 zones_optimal_open; u32 zones_optimal_nonseq; u32 zones_max_open; + u32 *zones_wp_ofst; + spinlock_t zones_wp_ofst_lock; + u32 *rev_wp_ofst; + struct mutex rev_mutex; + struct work_struct zone_wp_ofst_work; + char *zone_wp_update_buf; #endif atomic_t openers; sector_t capacity; /* size in logical blocks */ @@ -207,17 +213,35 @@ static inline int sd_is_zoned(struct scsi_disk *sdkp) #ifdef CONFIG_BLK_DEV_ZONED +int sd_zbc_init_disk(struct scsi_disk *sdkp); +void sd_zbc_release_disk(struct scsi_disk *sdkp); extern int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer); extern void sd_zbc_print_zones(struct scsi_disk *sdkp); blk_status_t sd_zbc_setup_zone_mgmt_cmnd(struct scsi_cmnd *cmd, unsigned char op, bool all); -extern void sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, - struct scsi_sense_hdr *sshdr); +unsigned int sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, + struct scsi_sense_hdr *sshdr); int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, unsigned int nr_zones, report_zones_cb cb, void *data); +blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd, sector_t *lba, + unsigned int nr_blocks); + #else /* CONFIG_BLK_DEV_ZONED */ +static inline int sd_zbc_init(void) +{ + return 0; +} + +static inline int sd_zbc_init_disk(struct scsi_disk *sdkp) +{ + return 0; +} + +static inline void sd_zbc_exit(void) {} +static inline void sd_zbc_release_disk(struct scsi_disk *sdkp) {} + static inline int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) { @@ -233,9 +257,18 @@ static inline blk_status_t 
sd_zbc_setup_zone_mgmt_cmnd(struct scsi_cmnd *cmd, return BLK_STS_TARGET; } -static inline void sd_zbc_complete(struct scsi_cmnd *cmd, - unsigned int good_bytes, - struct scsi_sense_hdr *sshdr) {} +static inline unsigned int sd_zbc_complete(struct scsi_cmnd *cmd, + unsigned int good_bytes, struct scsi_sense_hdr *sshdr) +{ + return 0; +} + +static inline blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd, + sector_t *lba, + unsigned int nr_blocks) +{ + return BLK_STS_TARGET; +} #define sd_zbc_report_zones NULL diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index ee156fbf3780..39ba9cd7e1fa 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -11,6 +11,7 @@ #include #include #include +#include #include @@ -19,11 +20,36 @@ #include "sd.h" +static unsigned int sd_zbc_get_zone_wp_ofst(struct blk_zone *zone) +{ + if (zone->type == ZBC_ZONE_TYPE_CONV) + return 0; + + switch (zone->cond) { + case BLK_ZONE_COND_IMP_OPEN: + case BLK_ZONE_COND_EXP_OPEN: + case BLK_ZONE_COND_CLOSED: + return zone->wp - zone->start; + case BLK_ZONE_COND_FULL: + return zone->len; + case BLK_ZONE_COND_EMPTY: + case BLK_ZONE_COND_OFFLINE: + case BLK_ZONE_COND_READONLY: + default: + /* + * Offline and read-only zones do not have a valid + * write pointer. Use 0 as for an empty zone. + */ + return 0; + } +} + static int sd_zbc_parse_report(struct scsi_disk *sdkp, u8 *buf, unsigned int idx, report_zones_cb cb, void *data) { struct scsi_device *sdp = sdkp->device; struct blk_zone zone = { 0 }; + int ret; zone.type = buf[0] & 0x0f; zone.cond = (buf[1] >> 4) & 0xf; @@ -39,7 +65,14 @@ static int sd_zbc_parse_report(struct scsi_disk *sdkp, u8 *buf, zone.cond == ZBC_ZONE_COND_FULL) zone.wp = zone.start + zone.len; - return cb(&zone, idx, data); + ret = cb(&zone, idx, data); + if (ret) + return ret; + + if (sdkp->rev_wp_ofst) + sdkp->rev_wp_ofst[idx] = sd_zbc_get_zone_wp_ofst(&zone); + + return 0; } /** @@ -229,6 +262,116 @@ static blk_status_t sd_zbc_cmnd_checks(struct scsi_cmnd *cmd) return BLK_STS_OK; } +#define SD_ZBC_INVALID_WP_OFST (~0u) +#define SD_ZBC_UPDATING_WP_OFST (SD_ZBC_INVALID_WP_OFST - 1) + +static int sd_zbc_update_wp_ofst_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ + struct scsi_disk *sdkp = data; + + lockdep_assert_held(&sdkp->zones_wp_ofst_lock); + + sdkp->zones_wp_ofst[idx] = sd_zbc_get_zone_wp_ofst(zone); + + return 0; +} + +static void sd_zbc_update_wp_ofst_workfn(struct work_struct *work) +{ + struct scsi_disk *sdkp; + unsigned int zno; + int ret; + + sdkp = container_of(work, struct scsi_disk, zone_wp_ofst_work); + + spin_lock_bh(&sdkp->zones_wp_ofst_lock); + for (zno = 0; zno < sdkp->nr_zones; zno++) { + if (sdkp->zones_wp_ofst[zno] != SD_ZBC_UPDATING_WP_OFST) + continue; + + spin_unlock_bh(&sdkp->zones_wp_ofst_lock); + ret = sd_zbc_do_report_zones(sdkp, sdkp->zone_wp_update_buf, + SD_BUF_SIZE, + zno * sdkp->zone_blocks, true); + spin_lock_bh(&sdkp->zones_wp_ofst_lock); + if (!ret) + sd_zbc_parse_report(sdkp, sdkp->zone_wp_update_buf + 64, + zno, sd_zbc_update_wp_ofst_cb, + sdkp); + } + spin_unlock_bh(&sdkp->zones_wp_ofst_lock); + + scsi_device_put(sdkp->device); +} + +/** + * sd_zbc_prepare_zone_append() - Prepare an emulated ZONE_APPEND command. + * @cmd: the command to setup + * @lba: the LBA to patch + * @nr_blocks: the number of LBAs to be written + * + * Called from sd_setup_read_write_cmnd() for REQ_OP_ZONE_APPEND. 
+ * @sd_zbc_prepare_zone_append() handles the necessary zone wrote locking and + * patching of the lba for an emulated ZONE_APPEND command. + * + * In case the cached write pointer offset is %SD_ZBC_INVALID_WP_OFST it will + * schedule a REPORT ZONES command and return BLK_STS_IOERR. + */ +blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd, sector_t *lba, + unsigned int nr_blocks) +{ + struct request *rq = cmd->request; + struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); + unsigned int wp_ofst, zno = blk_rq_zone_no(rq); + blk_status_t ret; + + ret = sd_zbc_cmnd_checks(cmd); + if (ret != BLK_STS_OK) + return ret; + + if (!blk_rq_zone_is_seq(rq)) + return BLK_STS_IOERR; + + /* Unlock of the write lock will happen in sd_zbc_complete() */ + if (!blk_req_zone_write_trylock(rq)) + return BLK_STS_ZONE_RESOURCE; + + spin_lock_bh(&sdkp->zones_wp_ofst_lock); + wp_ofst = sdkp->zones_wp_ofst[zno]; + switch (wp_ofst) { + case SD_ZBC_INVALID_WP_OFST: + /* + * We are about to schedule work to update a zone write pointer + * offset, which will cause the zone append command to be + * requeued. So make sure that the scsi device does not go away + * while the work is being processed. + */ + if (scsi_device_get(sdkp->device)) { + ret = BLK_STS_IOERR; + break; + } + sdkp->zones_wp_ofst[zno] = SD_ZBC_UPDATING_WP_OFST; + schedule_work(&sdkp->zone_wp_ofst_work); + /*FALLTHRU*/ + case SD_ZBC_UPDATING_WP_OFST: + ret = BLK_STS_DEV_RESOURCE; + break; + default: + wp_ofst = sectors_to_logical(sdkp->device, wp_ofst); + if (wp_ofst + nr_blocks > sdkp->zone_blocks) { + ret = BLK_STS_IOERR; + break; + } + + *lba += wp_ofst; + } + spin_unlock_bh(&sdkp->zones_wp_ofst_lock); + if (ret) + blk_req_zone_write_unlock(rq); + return ret; +} + /** * sd_zbc_setup_zone_mgmt_cmnd - Prepare a zone ZBC_OUT command. The operations * can be RESET WRITE POINTER, OPEN, CLOSE or FINISH. @@ -269,16 +412,104 @@ blk_status_t sd_zbc_setup_zone_mgmt_cmnd(struct scsi_cmnd *cmd, return BLK_STS_OK; } +static bool sd_zbc_need_zone_wp_update(struct request *rq) +{ + switch (req_op(rq)) { + case REQ_OP_ZONE_APPEND: + case REQ_OP_ZONE_FINISH: + case REQ_OP_ZONE_RESET: + case REQ_OP_ZONE_RESET_ALL: + return true; + case REQ_OP_WRITE: + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + return blk_rq_zone_is_seq(rq); + default: + return false; + } +} + +/** + * sd_zbc_zone_wp_update - Update cached zone write pointer upon cmd completion + * @cmd: Completed command + * @good_bytes: Command reply bytes + * + * Called from sd_zbc_complete() to handle the update of the cached zone write + * pointer value in case an update is needed. + */ +static unsigned int sd_zbc_zone_wp_update(struct scsi_cmnd *cmd, + unsigned int good_bytes) +{ + int result = cmd->result; + struct request *rq = cmd->request; + struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); + unsigned int zno = blk_rq_zone_no(rq); + enum req_opf op = req_op(rq); + + /* + * If we got an error for a command that needs updating the write + * pointer offset cache, we must mark the zone wp offset entry as + * invalid to force an update from disk the next time a zone append + * command is issued. + */ + spin_lock_bh(&sdkp->zones_wp_ofst_lock); + + if (result && op != REQ_OP_ZONE_RESET_ALL) { + if (op == REQ_OP_ZONE_APPEND) { + /* Force complete completion (no retry) */ + good_bytes = 0; + scsi_set_resid(cmd, blk_rq_bytes(rq)); + } + + /* + * Force an update of the zone write pointer offset on + * the next zone append access. 
+ */ + if (sdkp->zones_wp_ofst[zno] != SD_ZBC_UPDATING_WP_OFST) + sdkp->zones_wp_ofst[zno] = SD_ZBC_INVALID_WP_OFST; + goto unlock_wp_ofst; + } + + switch (op) { + case REQ_OP_ZONE_APPEND: + rq->__sector += sdkp->zones_wp_ofst[zno]; + /* fallthrough */ + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + if (sdkp->zones_wp_ofst[zno] < sd_zbc_zone_sectors(sdkp)) + sdkp->zones_wp_ofst[zno] += good_bytes >> SECTOR_SHIFT; + break; + case REQ_OP_ZONE_RESET: + sdkp->zones_wp_ofst[zno] = 0; + break; + case REQ_OP_ZONE_FINISH: + sdkp->zones_wp_ofst[zno] = sd_zbc_zone_sectors(sdkp); + break; + case REQ_OP_ZONE_RESET_ALL: + memset(sdkp->zones_wp_ofst, 0, + sdkp->nr_zones * sizeof(unsigned int)); + break; + default: + break; + } + +unlock_wp_ofst: + spin_unlock_bh(&sdkp->zones_wp_ofst_lock); + + return good_bytes; +} + /** * sd_zbc_complete - ZBC command post processing. * @cmd: Completed command * @good_bytes: Command reply bytes * @sshdr: command sense header * - * Called from sd_done(). Process report zones reply and handle reset zone - * and write commands errors. + * Called from sd_done() to handle zone commands errors and updates to the + * device queue zone write pointer offset cahce. */ -void sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, +unsigned int sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, struct scsi_sense_hdr *sshdr) { int result = cmd->result; @@ -294,7 +525,13 @@ void sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, * so be quiet about the error. */ rq->rq_flags |= RQF_QUIET; - } + } else if (sd_zbc_need_zone_wp_update(rq)) + good_bytes = sd_zbc_zone_wp_update(cmd, good_bytes); + + if (req_op(rq) == REQ_OP_ZONE_APPEND) + blk_req_zone_write_unlock(rq); + + return good_bytes; } /** @@ -396,11 +633,67 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf, return 0; } +static void sd_zbc_revalidate_zones_cb(struct gendisk *disk) +{ + struct scsi_disk *sdkp = scsi_disk(disk); + + swap(sdkp->zones_wp_ofst, sdkp->rev_wp_ofst); +} + +static int sd_zbc_revalidate_zones(struct scsi_disk *sdkp, + u32 zone_blocks, + unsigned int nr_zones) +{ + struct gendisk *disk = sdkp->disk; + int ret = 0; + + /* + * Make sure revalidate zones are serialized to ensure exclusive + * updates of the scsi disk data. + */ + mutex_lock(&sdkp->rev_mutex); + + /* + * Revalidate the disk zones to update the device request queue zone + * bitmaps and the zone write pointer offset array. Do this only once + * the device capacity is set on the second revalidate execution for + * disk scan or if something changed when executing a normal revalidate. 
+ */ + if (sdkp->first_scan) { + sdkp->zone_blocks = zone_blocks; + sdkp->nr_zones = nr_zones; + goto unlock; + } + + if (sdkp->zone_blocks == zone_blocks && + sdkp->nr_zones == nr_zones && + disk->queue->nr_zones == nr_zones) + goto unlock; + + sdkp->rev_wp_ofst = kvcalloc(nr_zones, sizeof(u32), GFP_NOIO); + if (!sdkp->rev_wp_ofst) { + ret = -ENOMEM; + goto unlock; + } + + ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb); + + kvfree(sdkp->rev_wp_ofst); + sdkp->rev_wp_ofst = NULL; + +unlock: + mutex_unlock(&sdkp->rev_mutex); + + return ret; +} + int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) { struct gendisk *disk = sdkp->disk; + struct request_queue *q = disk->queue; unsigned int nr_zones; u32 zone_blocks = 0; + u32 max_append; int ret; if (!sd_is_zoned(sdkp)) @@ -420,36 +713,23 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) if (ret != 0) goto err; + max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks), + q->limits.max_segments << (PAGE_SHIFT - 9)); + max_append = min_t(u32, max_append, queue_max_hw_sectors(q)); + /* The drive satisfies the kernel restrictions: set it up */ - blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, sdkp->disk->queue); - blk_queue_required_elevator_features(sdkp->disk->queue, - ELEVATOR_F_ZBD_SEQ_WRITE); + blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); + blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); + blk_queue_max_zone_append_sectors(q, max_append); nr_zones = round_up(sdkp->capacity, zone_blocks) >> ilog2(zone_blocks); /* READ16/WRITE16 is mandatory for ZBC disks */ sdkp->device->use_16_for_rw = 1; sdkp->device->use_10_for_rw = 0; - /* - * Revalidate the disk zone bitmaps once the block device capacity is - * set on the second revalidate execution during disk scan and if - * something changed when executing a normal revalidate. 
-	 */
-	if (sdkp->first_scan) {
-		sdkp->zone_blocks = zone_blocks;
-		sdkp->nr_zones = nr_zones;
-		return 0;
-	}
-
-	if (sdkp->zone_blocks != zone_blocks ||
-	    sdkp->nr_zones != nr_zones ||
-	    disk->queue->nr_zones != nr_zones) {
-		ret = blk_revalidate_disk_zones(disk);
-		if (ret != 0)
-			goto err;
-		sdkp->zone_blocks = zone_blocks;
-		sdkp->nr_zones = nr_zones;
-	}
+	ret = sd_zbc_revalidate_zones(sdkp, zone_blocks, nr_zones);
+	if (ret)
+		goto err;
 
 	return 0;
 
@@ -475,3 +755,28 @@ void sd_zbc_print_zones(struct scsi_disk *sdkp)
 		  sdkp->nr_zones, sdkp->zone_blocks);
 }
+
+int sd_zbc_init_disk(struct scsi_disk *sdkp)
+{
+	if (!sd_is_zoned(sdkp))
+		return 0;
+
+	sdkp->zones_wp_ofst = NULL;
+	spin_lock_init(&sdkp->zones_wp_ofst_lock);
+	sdkp->rev_wp_ofst = NULL;
+	mutex_init(&sdkp->rev_mutex);
+	INIT_WORK(&sdkp->zone_wp_ofst_work, sd_zbc_update_wp_ofst_workfn);
+	sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);
+	if (!sdkp->zone_wp_update_buf)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void sd_zbc_release_disk(struct scsi_disk *sdkp)
+{
+	kvfree(sdkp->zones_wp_ofst);
+	sdkp->zones_wp_ofst = NULL;
+	kfree(sdkp->zone_wp_update_buf);
+	sdkp->zone_wp_update_buf = NULL;
+}
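[Editor's note] Here is the standalone sketch of the write pointer offset
bookkeeping referenced in the commit message above. It is a simplified
host-side model of rules 1) to 4), not the driver code: locking and the
special invalid/updating markers used by sd_zbc.c are left out, and all names
below are illustrative.

/*
 * Simplified model of the per-zone write pointer offset cache described
 * in the commit message; not the driver code.
 */
#include <string.h>

enum zone_op { ZONE_RESET, ZONE_RESET_ALL, ZONE_FINISH, ZONE_WRITE_DONE };

struct wp_cache {
	unsigned int nr_zones;
	unsigned int zone_sectors;	/* zone size in 512B sectors */
	unsigned int *wp_ofst;		/* one 32-bit offset per zone */
};

static void wp_cache_update(struct wp_cache *c, enum zone_op op,
			    unsigned int zno, unsigned int sectors_written)
{
	switch (op) {
	case ZONE_RESET:	/* rule 1: back to the zone start */
		c->wp_ofst[zno] = 0;
		break;
	case ZONE_FINISH:	/* rule 2: write pointer at the zone size */
		c->wp_ofst[zno] = c->zone_sectors;
		break;
	case ZONE_WRITE_DONE:	/* rule 3: write, write same or zone append */
		c->wp_ofst[zno] += sectors_written;
		break;
	case ZONE_RESET_ALL:	/* rule 4: every zone back to 0 */
		memset(c->wp_ofst, 0, c->nr_zones * sizeof(*c->wp_ofst));
		break;
	}
}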
From patchwork Fri Apr 17 12:15:34 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495025
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Damien Le Moal,
    Christoph Hellwig, Johannes Thumshirn
Subject: [PATCH v7 09/11] null_blk: Support REQ_OP_ZONE_APPEND
Date: Fri, 17 Apr 2020 21:15:34 +0900
Message-Id: <20200417121536.5393-10-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>
References: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

From: Damien Le Moal

Support REQ_OP_ZONE_APPEND requests for null_blk devices with zoned
mode enabled. Use the internally tracked zone write pointer position
as the actual write position, and return it to the caller through the
request __sector field for an mq device or through the BIO sector for
a BIO-based device.

Reviewed-by: Christoph Hellwig
Signed-off-by: Damien Le Moal
Signed-off-by: Johannes Thumshirn
---
 drivers/block/null_blk_zoned.c | 39 +++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 8 deletions(-)
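[Editor's note] Before the patch itself, a minimal sketch of the position
reporting described above. It assumes the driver-internal struct nullb_cmd
from drivers/block/null_blk.h, which carries either a request or a BIO
depending on the queue mode; the helper name is illustrative, and the real
change is made inline in null_zone_write() below.

/*
 * Illustrative helper only: the emulated append is issued at the zone's
 * current write pointer, and that position is reported back through the
 * request (blk-mq device) or the BIO (bio-based device).
 */
static void report_append_position(struct nullb_cmd *cmd, sector_t wp)
{
	if (cmd->bio)
		cmd->bio->bi_iter.bi_sector = wp;	/* bio-based queue */
	else
		cmd->rq->__sector = wp;			/* blk-mq queue */
}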
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 46641df2e58e..5c70e0c7e862 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -70,13 +70,22 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 
 int null_register_zoned_dev(struct nullb *nullb)
 {
+	struct nullb_device *dev = nullb->dev;
 	struct request_queue *q = nullb->q;
 
-	if (queue_is_mq(q))
-		return blk_revalidate_disk_zones(nullb->disk, NULL);
+	if (queue_is_mq(q)) {
+		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
+
+		if (ret)
+			return ret;
+	} else {
+		blk_queue_chunk_sectors(q, dev->zone_size_sects);
+		q->nr_zones = blkdev_nr_zones(nullb->disk);
+	}
 
-	blk_queue_chunk_sectors(q, nullb->dev->zone_size_sects);
-	q->nr_zones = blkdev_nr_zones(nullb->disk);
+	blk_queue_max_zone_append_sectors(q,
+			min_t(sector_t, q->limits.max_hw_sectors,
+			      dev->zone_size_sects));
 
 	return 0;
 }
@@ -138,7 +147,7 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 }
 
 static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
-		     unsigned int nr_sectors)
+		     unsigned int nr_sectors, bool append)
 {
 	struct nullb_device *dev = cmd->nq->dev;
 	unsigned int zno = null_zone_no(dev, sector);
@@ -158,9 +167,21 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 	case BLK_ZONE_COND_IMP_OPEN:
 	case BLK_ZONE_COND_EXP_OPEN:
 	case BLK_ZONE_COND_CLOSED:
-		/* Writes must be at the write pointer position */
-		if (sector != zone->wp)
+		/*
+		 * Regular writes must be at the write pointer position.
+		 * Zone append writes are automatically issued at the write
+		 * pointer and the position returned using the request or BIO
+		 * sector.
+		 */
+		if (append) {
+			sector = zone->wp;
+			if (cmd->bio)
+				cmd->bio->bi_iter.bi_sector = sector;
+			else
+				cmd->rq->__sector = sector;
+		} else if (sector != zone->wp) {
 			return BLK_STS_IOERR;
+		}
 
 		if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
 			zone->cond = BLK_ZONE_COND_IMP_OPEN;
@@ -242,7 +263,9 @@ blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
 {
 	switch (op) {
 	case REQ_OP_WRITE:
-		return null_zone_write(cmd, sector, nr_sectors);
+		return null_zone_write(cmd, sector, nr_sectors, false);
+	case REQ_OP_ZONE_APPEND:
+		return null_zone_write(cmd, sector, nr_sectors, true);
 	case REQ_OP_ZONE_RESET:
 	case REQ_OP_ZONE_RESET_ALL:
 	case REQ_OP_ZONE_OPEN:
From patchwork Fri Apr 17 12:15:35 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495031
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn
Subject: [PATCH v7 10/11] block: export bio_release_pages and bio_iov_iter_get_pages
Date: Fri, 17 Apr 2020 21:15:35 +0900
Message-Id: <20200417121536.5393-11-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>
References: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

Export bio_release_pages and bio_iov_iter_get_pages, so they can be
used from modular code.

Signed-off-by: Johannes Thumshirn
---
 block/bio.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index 97baadc6d964..baa37d936d67 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -942,6 +942,7 @@ void bio_release_pages(struct bio *bio, bool mark_dirty)
 		put_page(bvec->bv_page);
 	}
 }
+EXPORT_SYMBOL_GPL(bio_release_pages);
 
 static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
 {
@@ -1105,6 +1106,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		bio_set_flag(bio, BIO_NO_PAGE_REF);
 	return bio->bi_vcnt ? 0 : ret;
 }
+EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
 
 static void submit_bio_wait_endio(struct bio *bio)
 {
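[Editor's note] What the two exports enable, in a nutshell: a module can now
build a BIO directly from an iov_iter and drop the pinned pages afterwards. A
rough sketch under those assumptions follows; example_write_iter() is an
invented name, and the real in-tree user added by this series is zonefs in
the following patch.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/uio.h>

/* Hedged sketch of a modular caller of the two newly exported helpers. */
static ssize_t example_write_iter(struct block_device *bdev, sector_t sector,
				  struct iov_iter *from)
{
	struct bio *bio;
	size_t size;
	ssize_t ret;

	bio = bio_alloc(GFP_KERNEL, iov_iter_npages(from, BIO_MAX_PAGES));
	if (!bio)
		return -ENOMEM;

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = sector;
	bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;

	/* Pin the user pages and add them to the BIO (now exported). */
	ret = bio_iov_iter_get_pages(bio, from);
	if (ret) {
		bio_put(bio);
		return ret;
	}
	size = bio->bi_iter.bi_size;

	ret = submit_bio_wait(bio);

	/* Release the pinned pages once the I/O is done (now exported). */
	bio_release_pages(bio, false);
	bio_put(bio);

	return ret ? ret : size;
}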
From patchwork Fri Apr 17 12:15:36 2020
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 11495037
From: Johannes Thumshirn
To: Jens Axboe
Cc: Christoph Hellwig, linux-block, Damien Le Moal, Keith Busch,
    linux-scsi@vger.kernel.org, "Martin K. Petersen",
    linux-fsdevel@vger.kernel.org, Daniel Wagner, Johannes Thumshirn
Subject: [PATCH v7 11/11] zonefs: use REQ_OP_ZONE_APPEND for sync DIO
Date: Fri, 17 Apr 2020 21:15:36 +0900
Message-Id: <20200417121536.5393-12-johannes.thumshirn@wdc.com>
In-Reply-To: <20200417121536.5393-1-johannes.thumshirn@wdc.com>
References: <20200417121536.5393-1-johannes.thumshirn@wdc.com>

Synchronous direct I/O to a sequential-write-only zone can be issued
using the new REQ_OP_ZONE_APPEND request operation. Because dispatching
multiple BIOs could result in reordering, asynchronous I/O cannot be
supported via this interface.

We can also only dispatch up to queue_max_zone_append_sectors() per
zone-append BIO, so a short write is returned to user-space when an I/O
larger than queue_max_zone_append_sectors() has been issued. A sketch
of this size capping follows below.
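[Editor's note] The size capping mentioned above, as a small sketch. It
assumes a struct block_device *bdev and the filesystem block size are at
hand; the complete logic, including the short-write return path, is in
zonefs_file_dio_append() in the patch below.

/*
 * Cap the iov_iter to what a single zone-append BIO may carry; anything
 * beyond this is reported back to user-space as a short write.
 */
static void cap_to_max_zone_append(struct iov_iter *from,
				   struct block_device *bdev,
				   unsigned int blocksize)
{
	unsigned int max;

	/* Device limit in 512B sectors, converted to bytes... */
	max = queue_max_zone_append_sectors(bdev_get_queue(bdev));
	/* ...and rounded down to a filesystem block boundary. */
	max = ALIGN_DOWN(max << SECTOR_SHIFT, blocksize);

	iov_iter_truncate(from, max);
}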
Signed-off-by: Johannes Thumshirn --- fs/zonefs/super.c | 80 ++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 72 insertions(+), 8 deletions(-) diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index 3ce9829a6936..0bf7009f50a2 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "zonefs.h" @@ -596,6 +597,61 @@ static const struct iomap_dio_ops zonefs_write_dio_ops = { .end_io = zonefs_file_write_dio_end_io, }; +static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from) +{ + struct inode *inode = file_inode(iocb->ki_filp); + struct zonefs_inode_info *zi = ZONEFS_I(inode); + struct block_device *bdev = inode->i_sb->s_bdev; + unsigned int max; + struct bio *bio; + ssize_t size; + int nr_pages; + ssize_t ret; + + nr_pages = iov_iter_npages(from, BIO_MAX_PAGES); + if (!nr_pages) + return 0; + + max = queue_max_zone_append_sectors(bdev_get_queue(bdev)); + max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize); + iov_iter_truncate(from, max); + + bio = bio_alloc_bioset(GFP_NOFS, nr_pages, &fs_bio_set); + if (!bio) + return -ENOMEM; + + bio_set_dev(bio, bdev); + bio->bi_iter.bi_sector = zi->i_zsector; + bio->bi_write_hint = iocb->ki_hint; + bio->bi_ioprio = iocb->ki_ioprio; + bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE; + if (iocb->ki_flags & IOCB_DSYNC) + bio->bi_opf |= REQ_FUA; + + ret = bio_iov_iter_get_pages(bio, from); + if (unlikely(ret)) { + bio_io_error(bio); + return ret; + } + size = bio->bi_iter.bi_size; + task_io_account_write(ret); + + if (iocb->ki_flags & IOCB_HIPRI) + bio_set_polled(bio, iocb); + + ret = submit_bio_wait(bio); + + bio_put(bio); + + zonefs_file_write_dio_end_io(iocb, size, ret, 0); + if (ret >= 0) { + iocb->ki_pos += size; + return size; + } + + return ret; +} + /* * Handle direct writes. For sequential zone files, this is the only possible * write path. For these files, check that the user is issuing writes @@ -611,6 +667,8 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from) struct inode *inode = file_inode(iocb->ki_filp); struct zonefs_inode_info *zi = ZONEFS_I(inode); struct super_block *sb = inode->i_sb; + bool sync = is_sync_kiocb(iocb); + bool append = false; size_t count; ssize_t ret; @@ -619,7 +677,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from) * as this can cause write reordering (e.g. the first aio gets EAGAIN * on the inode lock but the second goes through but is now unaligned). 
*/ - if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && !is_sync_kiocb(iocb) && + if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && !sync && (iocb->ki_flags & IOCB_NOWAIT)) return -EOPNOTSUPP; @@ -643,16 +701,22 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from) } /* Enforce sequential writes (append only) in sequential zones */ - mutex_lock(&zi->i_truncate_mutex); - if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && iocb->ki_pos != zi->i_wpoffset) { + if (zi->i_ztype == ZONEFS_ZTYPE_SEQ) { + mutex_lock(&zi->i_truncate_mutex); + if (iocb->ki_pos != zi->i_wpoffset) { + mutex_unlock(&zi->i_truncate_mutex); + ret = -EINVAL; + goto inode_unlock; + } mutex_unlock(&zi->i_truncate_mutex); - ret = -EINVAL; - goto inode_unlock; + append = sync; } - mutex_unlock(&zi->i_truncate_mutex); - ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops, - &zonefs_write_dio_ops, is_sync_kiocb(iocb)); + if (append) + ret = zonefs_file_dio_append(iocb, from); + else + ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops, + &zonefs_write_dio_ops, sync); if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && (ret > 0 || ret == -EIOCBQUEUED)) { if (ret > 0)