From patchwork Tue Jan 21 08:15:13 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13945934
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH for-next v4 1/5] null_blk: generate null_blk configfs features string
Date: Tue, 21 Jan 2025 17:15:13 +0900
Message-ID: <20250121081517.1212575-2-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>
References: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>

The null_blk configfs file 'features' provides a string that lists the
available null_blk features for userspace programs to reference. The string is
defined as a long constant in the code, which tends to be forgotten when new
features are added. It also causes checkpatch.pl to report "WARNING: quoted
string split across lines".

To avoid these drawbacks, generate the features string on the fly: refer to
the ca_name field of each element in the nullb_device_attrs table and
concatenate the names in the given buffer. Also, sort the nullb_device_attrs
table elements in alphabetical order.

Of note is that the feature "index" was missing before this commit. This
commit adds it to the generated string.

Suggested-by: Bart Van Assche
Reviewed-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 90 ++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 39 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d94ef37480bd..0725d221cff4 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -592,41 +592,41 @@ static ssize_t nullb_device_zone_offline_store(struct config_item *item,
 CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 
 static struct configfs_attribute *nullb_device_attrs[] = {
-        &nullb_device_attr_size,
-        &nullb_device_attr_completion_nsec,
-        &nullb_device_attr_submit_queues,
-        &nullb_device_attr_poll_queues,
-        &nullb_device_attr_home_node,
-        &nullb_device_attr_queue_mode,
+        &nullb_device_attr_badblocks,
+        &nullb_device_attr_blocking,
         &nullb_device_attr_blocksize,
-        &nullb_device_attr_max_sectors,
-        &nullb_device_attr_irqmode,
+        &nullb_device_attr_cache_size,
+        &nullb_device_attr_completion_nsec,
+        &nullb_device_attr_discard,
+        &nullb_device_attr_fua,
+        &nullb_device_attr_home_node,
         &nullb_device_attr_hw_queue_depth,
         &nullb_device_attr_index,
-        &nullb_device_attr_blocking,
-        &nullb_device_attr_use_per_node_hctx,
-        &nullb_device_attr_power,
-        &nullb_device_attr_memory_backed,
-        &nullb_device_attr_discard,
+        &nullb_device_attr_irqmode,
+        &nullb_device_attr_max_sectors,
         &nullb_device_attr_mbps,
-        &nullb_device_attr_cache_size,
-        &nullb_device_attr_badblocks,
-        &nullb_device_attr_zoned,
-        &nullb_device_attr_zone_size,
-        &nullb_device_attr_zone_capacity,
-        &nullb_device_attr_zone_nr_conv,
-        &nullb_device_attr_zone_max_open,
-        &nullb_device_attr_zone_max_active,
-        &nullb_device_attr_zone_append_max_sectors,
-        &nullb_device_attr_zone_readonly,
-        &nullb_device_attr_zone_offline,
-        &nullb_device_attr_zone_full,
-        &nullb_device_attr_virt_boundary,
+        &nullb_device_attr_memory_backed,
         &nullb_device_attr_no_sched,
-        &nullb_device_attr_shared_tags,
-        &nullb_device_attr_shared_tag_bitmap,
-        &nullb_device_attr_fua,
+        &nullb_device_attr_poll_queues,
+        &nullb_device_attr_power,
+        &nullb_device_attr_queue_mode,
         &nullb_device_attr_rotational,
+        &nullb_device_attr_shared_tag_bitmap,
+        &nullb_device_attr_shared_tags,
+        &nullb_device_attr_size,
+        &nullb_device_attr_submit_queues,
+        &nullb_device_attr_use_per_node_hctx,
+        &nullb_device_attr_virt_boundary,
+        &nullb_device_attr_zone_append_max_sectors,
+        &nullb_device_attr_zone_capacity,
+        &nullb_device_attr_zone_full,
+        &nullb_device_attr_zone_max_active,
+        &nullb_device_attr_zone_max_open,
+        &nullb_device_attr_zone_nr_conv,
+        &nullb_device_attr_zone_offline,
+        &nullb_device_attr_zone_readonly,
+        &nullb_device_attr_zone_size,
+        &nullb_device_attr_zoned,
         NULL,
 };
 
@@ -704,16 +704,28 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
-        return snprintf(page, PAGE_SIZE,
-                        "badblocks,blocking,blocksize,cache_size,fua,"
-                        "completion_nsec,discard,home_node,hw_queue_depth,"
-                        "irqmode,max_sectors,mbps,memory_backed,no_sched,"
-                        "poll_queues,power,queue_mode,shared_tag_bitmap,"
-                        "shared_tags,size,submit_queues,use_per_node_hctx,"
-                        "virt_boundary,zoned,zone_capacity,zone_max_active,"
-                        "zone_max_open,zone_nr_conv,zone_offline,zone_readonly,"
-                        "zone_size,zone_append_max_sectors,zone_full,"
-                        "rotational\n");
+
+        struct configfs_attribute **entry;
+        char delimiter = ',';
+        size_t left = PAGE_SIZE;
+        size_t written = 0;
+        int ret;
+
+        for (entry = &nullb_device_attrs[0]; *entry && left > 0; entry++) {
+                if (!*(entry + 1))
+                        delimiter = '\n';
+                ret = snprintf(page + written, left, "%s%c", (*entry)->ca_name,
+                               delimiter);
+                if (ret >= left) {
+                        WARN_ONCE(1, "Too many null_blk features to print\n");
+                        memzero_explicit(page, PAGE_SIZE);
+                        return -ENOBUFS;
+                }
+                left -= ret;
+                written += ret;
+        }
+
+        return written;
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
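The concatenate-with-bounds-check pattern used above can be tried out in isolation as a small userspace model. This is only an illustrative sketch: the PAGE_SIZE value, the shortened feature-name table and the plain -1 error return are assumptions for the example, not the kernel implementation.

/* Userspace model of the feature-string generation (illustrative only). */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static const char *nullb_feature_names[] = {
        "badblocks", "blocking", "blocksize", "cache_size", "zoned", NULL,
};

static int features_show(char *page, size_t page_size)
{
        size_t left = page_size;
        size_t written = 0;
        int ret;

        for (const char **name = nullb_feature_names; *name && left > 0; name++) {
                /* Use ',' between entries and '\n' after the last one. */
                char delimiter = *(name + 1) ? ',' : '\n';

                ret = snprintf(page + written, left, "%s%c", *name, delimiter);
                if (ret < 0 || (size_t)ret >= left)
                        return -1;      /* buffer too small, like -ENOBUFS */
                left -= ret;
                written += ret;
        }
        return (int)written;
}

int main(void)
{
        char page[PAGE_SIZE];
        int n = features_show(page, sizeof(page));

        if (n > 0)
                fwrite(page, 1, n, stdout);     /* e.g. "badblocks,...,zoned\n" */
        return 0;
}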
From patchwork Tue Jan 21 08:15:14 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13945935
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH for-next v4 2/5] null_blk: introduce badblocks_once parameter
Date: Tue, 21 Jan 2025 17:15:14 +0900
Message-ID: <20250121081517.1212575-3-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>
References: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>

When IO errors happen on real storage devices, IOs repeated to the same target
range can succeed thanks to recovery features of the devices, such as reserved
block assignment. To simulate such IO errors and recoveries, introduce the new
badblocks_once parameter. When this parameter is set to 1, the specified
badblocks are cleared after the first IO error, so that the next IO to the
same blocks succeeds.
Reviewed-by: Damien Le Moal
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 11 ++++++++---
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 0725d221cff4..2a060a6ea8c0 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -473,6 +473,7 @@ NULLB_DEVICE_ATTR(shared_tags, bool, NULL);
 NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -593,6 +594,7 @@ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 
 static struct configfs_attribute *nullb_device_attrs[] = {
         &nullb_device_attr_badblocks,
+        &nullb_device_attr_badblocks_once,
         &nullb_device_attr_blocking,
         &nullb_device_attr_blocksize,
         &nullb_device_attr_cache_size,
@@ -1315,10 +1317,13 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
         sector_t first_bad;
         int bad_sectors;
 
-        if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
-                return BLK_STS_IOERR;
+        if (!badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+                return BLK_STS_OK;
 
-        return BLK_STS_OK;
+        if (cmd->nq->dev->badblocks_once)
+                badblocks_clear(bb, first_bad, bad_sectors);
+
+        return BLK_STS_IOERR;
 }
 
 static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 6f9fe6171087..3c4c07f0418b 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -63,6 +63,7 @@ struct nullb_device {
         unsigned long flags; /* device flags */
         unsigned int curr_cache;
         struct badblocks badblocks;
+        bool badblocks_once;
 
         unsigned int nr_zones;
         unsigned int nr_zones_imp_open;
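As a usage sketch, a test program could set the new parameter through configfs before powering on the device. This is a hedged example: the configfs mount point, the device name nullb0, the attribute ordering and the badblocks range syntax are assumptions for illustration, not taken from this series.

/*
 * Illustrative only: assumes null_blk is loaded, configfs is mounted at
 * /sys/kernel/config and the nullb0 item does not exist yet.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_attr(const char *dir, const char *attr, const char *val)
{
        char path[256];
        int fd;
        ssize_t n;

        snprintf(path, sizeof(path), "%s/%s", dir, attr);
        fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;
        n = write(fd, val, strlen(val));
        close(fd);
        return n < 0 ? -1 : 0;
}

int main(void)
{
        const char *dev = "/sys/kernel/config/nullb/nullb0";

        mkdir(dev, 0755);                        /* create the device item */
        write_attr(dev, "badblocks", "+0-7");    /* mark sectors 0..7 bad */
        write_attr(dev, "badblocks_once", "1");  /* clear them after 1st error */
        write_attr(dev, "power", "1");           /* instantiate /dev/nullb0 */

        /*
         * A first read or write of sectors 0..7 on /dev/nullb0 is expected to
         * fail; a retry of the same range should then succeed, because the
         * bad range was cleared by the first error.
         */
        return 0;
}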
From patchwork Tue Jan 21 08:15:15 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13945937
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH for-next v4 3/5] null_blk: fix zone resource management for badblocks
Date: Tue, 21 Jan 2025 17:15:15 +0900
Message-ID: <20250121081517.1212575-4-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>
References: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>

When the badblocks parameter is set for zoned null_blk, zone resource
management does not work correctly. This issue arises because
null_zone_write() modifies the zone resource status and then calls
null_process_cmd(), which handles the badblocks parameter. When badblocks
cause an IO failure and no IO happens, the zone resource status should not
change, but at that point it has already been changed.

To fix this unexpected change in zone resource status, when writes are
requested for sequential write required zones, handle badblocks not in
null_process_cmd() but in null_zone_write(). Modify null_zone_write() to call
null_handle_badblocks() before changing the zone resource status. Also, call
null_handle_memory_backed() instead of null_process_cmd().
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 11 ++++-------
 drivers/block/null_blk/null_blk.h |  5 +++++
 drivers/block/null_blk/zoned.c    | 15 ++++++++++++---
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 2a060a6ea8c0..87037cb375c9 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1309,9 +1309,8 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
         return sts;
 }
 
-static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
-                                                 sector_t sector,
-                                                 sector_t nr_sectors)
+blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
+                                   sector_t nr_sectors)
 {
         struct badblocks *bb = &cmd->nq->dev->badblocks;
         sector_t first_bad;
@@ -1326,10 +1325,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
         return BLK_STS_IOERR;
 }
 
-static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
-                                                     enum req_op op,
-                                                     sector_t sector,
-                                                     sector_t nr_sectors)
+blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
+                                       sector_t sector, sector_t nr_sectors)
 {
         struct nullb_device *dev = cmd->nq->dev;
 
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 3c4c07f0418b..ee60f3a88796 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -132,6 +132,11 @@ blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
                                  sector_t nr_sectors);
 blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
                               sector_t sector, unsigned int nr_sectors);
+blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
+                                   sector_t nr_sectors);
+blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
+                                       sector_t sector, sector_t nr_sectors);
+
 #ifdef CONFIG_BLK_DEV_ZONED
 int null_init_zoned_dev(struct nullb_device *dev, struct queue_limits *lim);
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 0d5f9bf95229..09dae8d018aa 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -389,6 +389,12 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
                 goto unlock_zone;
         }
 
+        if (dev->badblocks.shift != -1) {
+                ret = null_handle_badblocks(cmd, sector, nr_sectors);
+                if (ret != BLK_STS_OK)
+                        goto unlock_zone;
+        }
+
         if (zone->cond == BLK_ZONE_COND_CLOSED ||
             zone->cond == BLK_ZONE_COND_EMPTY) {
                 if (dev->need_zone_res_mgmt) {
@@ -412,9 +418,12 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
                 zone->cond = BLK_ZONE_COND_IMP_OPEN;
         }
 
-        ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
-        if (ret != BLK_STS_OK)
-                goto unlock_zone;
+        if (dev->memory_backed) {
+                ret = null_handle_memory_backed(cmd, REQ_OP_WRITE, sector,
+                                                nr_sectors);
+                if (ret != BLK_STS_OK)
+                        goto unlock_zone;
+        }
 
         zone->wp += nr_sectors;
         if (zone->wp == zone->start + zone->capacity) {
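The core of the fix is an ordering rule: decide whether the write fails because of a bad block before any zone resource state is touched, so a failed write leaves the zone exactly as it was. A minimal standalone model of that rule, with simplified types and a made-up bad-block range (not the kernel code), could look like this:

/* Simplified model, not kernel code: names and the bad range are assumptions. */
#include <stdbool.h>
#include <stdio.h>

struct zone {
        unsigned long long wp;  /* write pointer */
        bool implicitly_open;
};

static bool range_has_badblock(unsigned long long sector, unsigned int nr_sectors)
{
        return sector < 8 && nr_sectors > 0;    /* pretend sectors 0..7 are bad */
}

static int zone_write(struct zone *z, unsigned long long sector, unsigned int nr_sectors)
{
        /* 1. Check bad blocks first, before consuming any zone resources. */
        if (range_has_badblock(sector, nr_sectors))
                return -1;

        /* 2. Only now transition the zone and advance the write pointer. */
        z->implicitly_open = true;
        z->wp += nr_sectors;
        return 0;
}

int main(void)
{
        struct zone z = { .wp = 0, .implicitly_open = false };

        if (zone_write(&z, 0, 8))
                printf("write failed, wp=%llu, open=%d (zone state unchanged)\n",
                       z.wp, (int)z.implicitly_open);
        return 0;
}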
From patchwork Tue Jan 21 08:15:16 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13945936
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH for-next v4 4/5] null_blk: pass transfer size to null_handle_rq()
Date: Tue, 21 Jan 2025 17:15:16 +0900
Message-ID: <20250121081517.1212575-5-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>
References: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>

In preparation for supporting partial data transfers, add a new argument to
null_handle_rq() to pass the number of sectors to transfer. While at it,
rename the function from null_handle_rq() to null_handle_data_transfer().
This commit does not change the behavior.
Reviewed-by: Damien Le Moal
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 16 ++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 87037cb375c9..802576698812 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1263,25 +1263,37 @@ static int null_transfer(struct nullb *nullb, struct page *page,
         return err;
 }
 
-static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+/*
+ * Transfer data for the given request. The transfer size is capped with the
+ * nr_sectors argument.
+ */
+static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
+                                              sector_t nr_sectors)
 {
         struct request *rq = blk_mq_rq_from_pdu(cmd);
         struct nullb *nullb = cmd->nq->dev->nullb;
         int err = 0;
         unsigned int len;
         sector_t sector = blk_rq_pos(rq);
+        unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
+        unsigned int transferred_bytes = 0;
         struct req_iterator iter;
         struct bio_vec bvec;
 
         spin_lock_irq(&nullb->lock);
         rq_for_each_segment(bvec, rq, iter) {
                 len = bvec.bv_len;
+                if (transferred_bytes + len > max_bytes)
+                        len = max_bytes - transferred_bytes;
                 err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
                                     op_is_write(req_op(rq)), sector,
                                     rq->cmd_flags & REQ_FUA);
                 if (err)
                         break;
                 sector += len >> SECTOR_SHIFT;
+                transferred_bytes += len;
+                if (transferred_bytes >= max_bytes)
+                        break;
         }
         spin_unlock_irq(&nullb->lock);
 
@@ -1333,7 +1345,7 @@ blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
         if (op == REQ_OP_DISCARD)
                 return null_handle_discard(dev, sector, nr_sectors);
 
-        return null_handle_rq(cmd);
+        return null_handle_data_transfer(cmd, nr_sectors);
 }
 
 static void nullb_zero_read_cmd_buffer(struct nullb_cmd *cmd)
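The capping logic above can be exercised on its own: walk the request's segments, trim the last segment so the running byte count never exceeds the nr_sectors cap, and stop once the cap is reached. The sketch below is standalone, with made-up segment sizes standing in for the bio_vec lengths the real code iterates over:

#include <stdio.h>

#define SECTOR_SHIFT 9

int main(void)
{
        unsigned int segment_len[] = { 4096, 4096, 4096 };     /* pretend bvec lengths */
        unsigned int nr_sectors = 9;                           /* cap: 9 * 512 bytes */
        unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
        unsigned int transferred_bytes = 0;

        for (unsigned int i = 0; i < 3; i++) {
                unsigned int len = segment_len[i];

                if (transferred_bytes + len > max_bytes)
                        len = max_bytes - transferred_bytes;   /* partial last segment */

                /* null_transfer() would copy 'len' bytes here */
                transferred_bytes += len;
                printf("segment %u: %u bytes (total %u of %u)\n",
                       i, len, transferred_bytes, max_bytes);

                if (transferred_bytes >= max_bytes)
                        break;
        }
        return 0;
}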
From patchwork Tue Jan 21 08:15:17 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13945938
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH for-next v4 5/5] null_blk: do partial IO for bad blocks
Date: Tue, 21 Jan 2025 17:15:17 +0900
Message-ID: <20250121081517.1212575-6-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>
References: <20250121081517.1212575-1-shinichiro.kawasaki@wdc.com>

The current null_blk implementation checks if any bad blocks exist in the
target blocks of each IO. If so, the IO fails and no data is transferred for
any of the IO target blocks. However, when real storage devices have bad
blocks, the devices may transfer data partially, up to the first bad block
(e.g., SAS drives). In particular, when the IO is a write operation, such a
partial IO leaves partially written data on the device.

To simulate such partial IO using null_blk, introduce the new parameter
'badblocks_partial_io'. When this parameter is set, null_handle_badblocks()
returns the number of sectors for the partial IO as its third pointer
argument. Pass the returned number of sectors to the following calls to
null_handle_memory_backed() in null_process_cmd() and null_zone_write().
Reviewed-by: Damien Le Moal
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 40 ++++++++++++++++++++-------
 drivers/block/null_blk/null_blk.h |  4 ++--
 drivers/block/null_blk/zoned.c    |  9 ++++---
 3 files changed, 40 insertions(+), 13 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 802576698812..31d44cef6841 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -474,6 +474,7 @@ NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
 NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_partial_io, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -595,6 +596,7 @@ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 static struct configfs_attribute *nullb_device_attrs[] = {
         &nullb_device_attr_badblocks,
         &nullb_device_attr_badblocks_once,
+        &nullb_device_attr_badblocks_partial_io,
         &nullb_device_attr_blocking,
         &nullb_device_attr_blocksize,
         &nullb_device_attr_cache_size,
@@ -1321,19 +1323,40 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
         return sts;
 }
 
+/*
+ * Check if the command should fail for the badblocks. If so, return
+ * BLK_STS_IOERR and return number of partial I/O sectors to be written or read,
+ * which may be less than the requested number of sectors.
+ *
+ * @cmd: The command to handle.
+ * @sector: The start sector for I/O.
+ * @nr_sectors: Specifies number of sectors to write or read, and returns the
+ *              number of sectors to be written or read.
+ */
 blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
-                                   sector_t nr_sectors)
+                                   unsigned int *nr_sectors)
 {
         struct badblocks *bb = &cmd->nq->dev->badblocks;
+        struct nullb_device *dev = cmd->nq->dev;
+        unsigned int block_sectors = dev->blocksize >> SECTOR_SHIFT;
         sector_t first_bad;
         int bad_sectors;
+        unsigned int partial_io_sectors = 0;
 
-        if (!badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+        if (!badblocks_check(bb, sector, *nr_sectors, &first_bad, &bad_sectors))
                 return BLK_STS_OK;
 
         if (cmd->nq->dev->badblocks_once)
                 badblocks_clear(bb, first_bad, bad_sectors);
 
+        if (cmd->nq->dev->badblocks_partial_io) {
+                if (!IS_ALIGNED(first_bad, block_sectors))
+                        first_bad = ALIGN_DOWN(first_bad, block_sectors);
+                if (sector < first_bad)
+                        partial_io_sectors = first_bad - sector;
+        }
+        *nr_sectors = partial_io_sectors;
+
         return BLK_STS_IOERR;
 }
 
@@ -1392,18 +1415,19 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
                               sector_t sector, unsigned int nr_sectors)
 {
         struct nullb_device *dev = cmd->nq->dev;
+        blk_status_t badblocks_ret = BLK_STS_OK;
         blk_status_t ret;
 
-        if (dev->badblocks.shift != -1) {
-                ret = null_handle_badblocks(cmd, sector, nr_sectors);
+        if (dev->badblocks.shift != -1)
+                badblocks_ret = null_handle_badblocks(cmd, sector, &nr_sectors);
+
+        if (dev->memory_backed && nr_sectors) {
+                ret = null_handle_memory_backed(cmd, op, sector, nr_sectors);
                 if (ret != BLK_STS_OK)
                         return ret;
         }
 
-        if (dev->memory_backed)
-                return null_handle_memory_backed(cmd, op, sector, nr_sectors);
-
-        return BLK_STS_OK;
+        return badblocks_ret;
 }
 
 static void null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index ee60f3a88796..7bb6128dbaaf 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -64,6 +64,7 @@ struct nullb_device {
         unsigned int curr_cache;
         struct badblocks badblocks;
         bool badblocks_once;
+        bool badblocks_partial_io;
 
         unsigned int nr_zones;
         unsigned int nr_zones_imp_open;
@@ -133,11 +134,10 @@ blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
 blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
                               sector_t sector, unsigned int nr_sectors);
 blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
-                                   sector_t nr_sectors);
+                                   unsigned int *nr_sectors);
 blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
                                        sector_t sector, sector_t nr_sectors);
-
 #ifdef CONFIG_BLK_DEV_ZONED
 int null_init_zoned_dev(struct nullb_device *dev, struct queue_limits *lim);
 int null_register_zoned_dev(struct nullb *nullb);
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 09dae8d018aa..c9f984445005 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -353,6 +353,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
         struct nullb_device *dev = cmd->nq->dev;
         unsigned int zno = null_zone_no(dev, sector);
         struct nullb_zone *zone = &dev->zones[zno];
+        blk_status_t badblocks_ret = BLK_STS_OK;
         blk_status_t ret;
 
         trace_nullb_zone_op(cmd, zno, zone->cond);
@@ -390,9 +391,11 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
         }
 
         if (dev->badblocks.shift != -1) {
-                ret = null_handle_badblocks(cmd, sector, nr_sectors);
-                if (ret != BLK_STS_OK)
+                badblocks_ret = null_handle_badblocks(cmd, sector, &nr_sectors);
+                if (badblocks_ret != BLK_STS_OK && !nr_sectors) {
+                        ret = badblocks_ret;
                         goto unlock_zone;
+                }
         }
 
         if (zone->cond == BLK_ZONE_COND_CLOSED ||
@@ -438,7 +441,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
                 zone->cond = BLK_ZONE_COND_FULL;
         }
 
-        ret = BLK_STS_OK;
+        ret = badblocks_ret;
 
 unlock_zone:
         null_unlock_zone(dev, zone);
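To see how the partial IO size falls out of the alignment arithmetic in null_handle_badblocks() when badblocks_partial_io is set, here is a worked example. The concrete numbers (512-byte sectors, 4096-byte logical blocks, first bad sector 11) are illustrative assumptions, and the macros below are userspace stand-ins for the kernel helpers of the same name:

#include <stdio.h>

#define IS_ALIGNED(x, a)        (((x) % (a)) == 0)
#define ALIGN_DOWN(x, a)        (((x) / (a)) * (a))

int main(void)
{
        unsigned long long sector = 0;          /* IO starts at sector 0 */
        unsigned int nr_sectors = 16;           /* 16 sectors requested */
        unsigned long long first_bad = 11;      /* first bad sector reported */
        unsigned int block_sectors = 8;         /* 4096-byte blocks / 512-byte sectors */
        unsigned int partial_io_sectors = 0;

        if (!IS_ALIGNED(first_bad, block_sectors))
                first_bad = ALIGN_DOWN(first_bad, block_sectors);       /* 11 -> 8 */
        if (sector < first_bad)
                partial_io_sectors = first_bad - sector;                /* 8 sectors */

        /*
         * The IO still completes with an error, but 8 of the 16 requested
         * sectors are transferred before the error is reported.
         */
        printf("partial_io_sectors = %u of %u\n", partial_io_sectors, nr_sectors);
        return 0;
}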