From patchwork Sun Sep 24 07:02:37 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 9967769
From: Damien Le Moal
To: linux-scsi@vger.kernel.org, "Martin K . Petersen",
    linux-block@vger.kernel.org, Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Subject: [PATCH V4 06/16] block: Add zoned block device information to request queue
Date: Sun, 24 Sep 2017 16:02:37 +0900
Message-Id: <20170924070247.25560-7-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.13.5
In-Reply-To: <20170924070247.25560-1-damien.lemoal@wdc.com>
References: <20170924070247.25560-1-damien.lemoal@wdc.com>

Components relying only on the request_queue structure for accessing
block devices (e.g. I/O schedulers) have limited knowledge of the
device characteristics. In particular, the device capacity cannot be
easily discovered, which for a zoned block device also results in the
inability to easily know the number of zones of the device (the zone
size is indicated by the chunk_sectors field of the queue limits).

Introduce the nr_zones field to the request_queue structure to
simplify access to this information. Also add the seq_zones bitmap,
which indicates which zones of the device are sequential (write
preferred or write required) zones. These two fields can be
initialized by the low level block device driver (sd.c for ZBC/ZAC
disks) and, if desired, by stacking drivers (e.g. device mappers) too.
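For illustration only (not part of this patch), here is a minimal
sketch of how a low level driver could initialize the two new fields
once the disk capacity and zone size are known. The function name
example_init_zone_info is hypothetical; only nr_zones, seq_zones and
the existing queue helpers come from this patch.

    /*
     * Hypothetical sketch: size and allocate the seq_zones bitmap
     * from the disk capacity and zone size, assuming the queue
     * limits (chunk_sectors, zoned model) are already set up.
     */
    static int example_init_zone_info(struct request_queue *q,
                                      sector_t capacity)
    {
            unsigned int zone_sectors;
            unsigned int nr_zones;

            if (!blk_queue_is_zoned(q))
                    return 0;

            /* Round up: the last zone may be smaller than the zone size */
            zone_sectors = blk_queue_zone_sectors(q);
            nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);

            q->seq_zones = kcalloc(BITS_TO_LONGS(nr_zones),
                                   sizeof(unsigned long), GFP_KERNEL);
            if (!q->seq_zones)
                    return -ENOMEM;
            q->nr_zones = nr_zones;

            /*
             * The driver would then walk its zone report and do
             * set_bit(zno, q->seq_zones) for each sequential zone.
             */
            return 0;
    }

A stacking driver that wants to expose this information would do the
equivalent when assembling its own queue limits.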
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 include/linux/blkdev.h | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 460294bb0fa5..f53bc4191dd6 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -544,6 +544,13 @@ struct request_queue {
 	struct queue_limits	limits;
 
 	/*
+	 * Zoned block device information for mq I/O schedulers.
+	 * Set by low level device driver (stacking driver may not set this).
+	 */
+	unsigned int		nr_zones;
+	unsigned long		*seq_zones;
+
+	/*
 	 * sg stuff
 	 */
 	unsigned int		sg_timeout;
@@ -785,6 +792,27 @@ static inline unsigned int blk_queue_zone_sectors(struct request_queue *q)
 	return blk_queue_is_zoned(q) ? q->limits.chunk_sectors : 0;
 }
 
+static inline unsigned int blk_queue_nr_zones(struct request_queue *q)
+{
+	return q->nr_zones;
+}
+
+static inline unsigned int blk_queue_zone_no(struct request_queue *q,
+					     sector_t sector)
+{
+	if (!blk_queue_is_zoned(q))
+		return 0;
+	return sector >> ilog2(q->limits.chunk_sectors);
+}
+
+static inline bool blk_queue_zone_is_seq(struct request_queue *q,
+					 sector_t sector)
+{
+	if (!blk_queue_is_zoned(q) || !q->seq_zones)
+		return false;
+	return test_bit(blk_queue_zone_no(q, sector), q->seq_zones);
+}
+
 static inline bool rq_is_sync(struct request *rq)
 {
 	return op_is_sync(rq->cmd_flags);
@@ -1031,6 +1059,16 @@ static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
 	return blk_rq_cur_bytes(rq) >> 9;
 }
 
+static inline unsigned int blk_rq_zone_no(struct request *rq)
+{
+	return blk_queue_zone_no(rq->q, blk_rq_pos(rq));
+}
+
+static inline unsigned int blk_rq_zone_is_seq(struct request *rq)
+{
+	return blk_queue_zone_is_seq(rq->q, blk_rq_pos(rq));
+}
+
 /*
  * Some commands like WRITE SAME have a payload or data transfer size which
  * is different from the size of the request. Any driver that supports such
@@ -1582,6 +1620,16 @@ static inline unsigned int bdev_zone_sectors(struct block_device *bdev)
 	return 0;
 }
 
+static inline unsigned int bdev_nr_zones(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (q)
+		return blk_queue_nr_zones(q);
+
+	return 0;
+}
+
 static inline int queue_dma_alignment(struct request_queue *q)
 {
 	return q ? q->dma_alignment : 511;
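To show the intended consumer side, here is a hedged sketch of how an
mq I/O scheduler might use the new helpers, e.g. to decide which
requests need per-zone serialization. The function
example_rq_needs_ordering and the policy it implies are hypothetical;
only blk_rq_zone_is_seq(), blk_rq_zone_no() and blk_queue_nr_zones()
come from this patch.

    /*
     * Hypothetical sketch: classify a request by the zone model of
     * the sector it targets. A scheduler could key a per-zone lock
     * on blk_rq_zone_no(rq) for requests that return true here.
     */
    static bool example_rq_needs_ordering(struct request *rq)
    {
            /* Conventional zones (and non-zoned devices) need no ordering */
            if (!blk_rq_zone_is_seq(rq))
                    return false;

            pr_debug("rq in sequential zone %u of %u\n",
                     blk_rq_zone_no(rq), blk_queue_nr_zones(rq->q));
            return true;
    }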