From patchwork Thu Dec 21 06:43:43 2017
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 10127009
From: Damien Le Moal <damien.lemoal@wdc.com>
To: linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-block@vger.kernel.org,
	Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Subject: [PATCH V9 6/7] sd_zbc: Initialize device request queue zoned data
Date: Thu, 21 Dec 2017 15:43:43 +0900
Message-Id: <20171221064344.6228-7-damien.lemoal@wdc.com>
In-Reply-To: <20171221064344.6228-1-damien.lemoal@wdc.com>
References: <20171221064344.6228-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Initialize the seq_zones_bitmap, seq_zones_wlock and nr_zones fields of
the disk request queue on disk revalidate. As the seq_zones_bitmap and
seq_zones_wlock allocations are identical, introduce the helper
sd_zbc_alloc_zone_bitmap(). Using this helper, reallocate the bitmaps
whenever the disk capacity (number of zones) changes.
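
For reference, the size allocated by sd_zbc_alloc_zone_bitmap() amounts to
one bit per zone, rounded up to a whole number of unsigned longs. A minimal
stand-alone sketch of that sizing (zone_bitmap_bytes() is an illustrative
name only, not part of this patch):

	#include <stddef.h>

	/* One bit per zone, rounded up to a whole number of unsigned longs. */
	static size_t zone_bitmap_bytes(unsigned int nr_zones)
	{
		size_t bits_per_long = 8 * sizeof(unsigned long);

		return ((nr_zones + bits_per_long - 1) / bits_per_long) *
			sizeof(unsigned long);
	}

For example, 8192 zones need 128 unsigned longs (1024 bytes) on a 64-bit
machine.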
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/scsi/sd_zbc.c | 152 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 144 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 27793b9f54c0..c715b8363ce0 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -586,8 +586,123 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)
 	return 0;
 }
 
+/**
+ * sd_zbc_alloc_zone_bitmap - Allocate a zone bitmap (one bit per zone).
+ * @sdkp: The disk of the bitmap
+ */
+static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp)
+{
+	struct request_queue *q = sdkp->disk->queue;
+
+	return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones)
+			    * sizeof(unsigned long),
+			    GFP_KERNEL, q->node);
+}
+
+/**
+ * sd_zbc_get_seq_zones - Parse report zones reply to identify sequential zones
+ * @sdkp: disk used
+ * @buf: report reply buffer
+ * @seq_zones_bitmap: bitmap of sequential zones to set
+ *
+ * Parse reported zone descriptors in @buf to identify sequential zones and
+ * set the reported zone bit in @seq_zones_bitmap accordingly.
+ * Since read-only and offline zones cannot be written, do not
+ * mark them as sequential in the bitmap.
+ * Return the LBA after the last zone reported.
+ */
+static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf,
+				     unsigned int buflen,
+				     unsigned long *seq_zones_bitmap)
+{
+	sector_t lba, next_lba = sdkp->capacity;
+	unsigned int buf_len, list_length;
+	unsigned char *rec;
+	u8 type, cond;
+
+	list_length = get_unaligned_be32(&buf[0]) + 64;
+	buf_len = min(list_length, buflen);
+	rec = buf + 64;
+
+	while (rec < buf + buf_len) {
+		type = rec[0] & 0x0f;
+		cond = (rec[1] >> 4) & 0xf;
+		lba = get_unaligned_be64(&rec[16]);
+		if (type != ZBC_ZONE_TYPE_CONV &&
+		    cond != ZBC_ZONE_COND_READONLY &&
+		    cond != ZBC_ZONE_COND_OFFLINE)
+			set_bit(lba >> sdkp->zone_shift, seq_zones_bitmap);
+		next_lba = lba + get_unaligned_be64(&rec[8]);
+		rec += 64;
+	}
+
+	return next_lba;
+}
+
+/**
+ * sd_zbc_setup_seq_zones_bitmap - Initialize the disk seq zone bitmap.
+ * @sdkp: target disk
+ *
+ * Allocate a zone bitmap and initialize it by identifying sequential zones.
+ */
+static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp)
+{
+	struct request_queue *q = sdkp->disk->queue;
+	unsigned long *seq_zones_bitmap;
+	sector_t lba = 0;
+	unsigned char *buf;
+	int ret = -ENOMEM;
+
+	seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(sdkp);
+	if (!seq_zones_bitmap)
+		return -ENOMEM;
+
+	buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL);
+	if (!buf)
+		goto out;
+
+	while (lba < sdkp->capacity) {
+		ret = sd_zbc_report_zones(sdkp, buf, SD_ZBC_BUF_SIZE, lba);
+		if (ret)
+			goto out;
+		lba = sd_zbc_get_seq_zones(sdkp, buf, SD_ZBC_BUF_SIZE,
+					   seq_zones_bitmap);
+	}
+
+	if (lba != sdkp->capacity) {
+		/* Something went wrong */
+		ret = -EIO;
+	}
+
+out:
+	kfree(buf);
+	if (ret) {
+		kfree(seq_zones_bitmap);
+		return ret;
+	}
+
+	q->seq_zones_bitmap = seq_zones_bitmap;
+
+	return 0;
+}
+
+static void sd_zbc_cleanup(struct scsi_disk *sdkp)
+{
+	struct request_queue *q = sdkp->disk->queue;
+
+	kfree(q->seq_zones_bitmap);
+	q->seq_zones_bitmap = NULL;
+
+	kfree(q->seq_zones_wlock);
+	q->seq_zones_wlock = NULL;
+
+	q->nr_zones = 0;
+}
+
 static int sd_zbc_setup(struct scsi_disk *sdkp)
 {
+	struct request_queue *q = sdkp->disk->queue;
+	int ret;
 
 	/* READ16/WRITE16 is mandatory for ZBC disks */
 	sdkp->device->use_16_for_rw = 1;
@@ -599,15 +714,36 @@ static int sd_zbc_setup(struct scsi_disk *sdkp)
 	sdkp->nr_zones = round_up(sdkp->capacity, sdkp->zone_blocks)
 		>> sdkp->zone_shift;
 
-	if (!sdkp->zones_wlock) {
-		sdkp->zones_wlock = kcalloc(BITS_TO_LONGS(sdkp->nr_zones),
-					    sizeof(unsigned long),
-					    GFP_KERNEL);
-		if (!sdkp->zones_wlock)
-			return -ENOMEM;
+	/*
+	 * Initialize the device request queue information if the number
+	 * of zones changed.
+	 */
+	if (sdkp->nr_zones != q->nr_zones) {
+
+		sd_zbc_cleanup(sdkp);
+
+		q->nr_zones = sdkp->nr_zones;
+		if (sdkp->nr_zones) {
+			q->seq_zones_wlock = sd_zbc_alloc_zone_bitmap(sdkp);
+			if (!q->seq_zones_wlock) {
+				ret = -ENOMEM;
+				goto err;
+			}
+
+			ret = sd_zbc_setup_seq_zones_bitmap(sdkp);
+			if (ret) {
+				sd_zbc_cleanup(sdkp);
+				goto err;
+			}
+		}
+
 	}
 
 	return 0;
+
+err:
+	sd_zbc_cleanup(sdkp);
+	return ret;
 }
 
 int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
@@ -661,14 +797,14 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
 
 err:
 	sdkp->capacity = 0;
+	sd_zbc_cleanup(sdkp);
 
 	return ret;
 }
 
 void sd_zbc_remove(struct scsi_disk *sdkp)
 {
-	kfree(sdkp->zones_wlock);
-	sdkp->zones_wlock = NULL;
+	sd_zbc_cleanup(sdkp);
 }
 
 void sd_zbc_print_zones(struct scsi_disk *sdkp)
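
The q->seq_zones_bitmap, q->seq_zones_wlock and q->nr_zones members set up
above are request queue fields that this patch assumes already exist
(presumably provided by the block layer patches of this series). As a rough
illustration of how a consumer could test the populated bitmap, a sketch
follows; example_zone_is_seq() is hypothetical and not part of this patch or
series:

	/*
	 * Hypothetical sketch only: report whether zone number 'zno' of a
	 * queue was marked sequential by sd_zbc_setup_seq_zones_bitmap().
	 * Assumes <linux/blkdev.h> and <linux/bitops.h>.
	 */
	static inline bool example_zone_is_seq(struct request_queue *q,
					       unsigned int zno)
	{
		if (!q->seq_zones_bitmap || zno >= q->nr_zones)
			return false;

		return test_bit(zno, q->seq_zones_bitmap);
	}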