From patchwork Sat Jan 25 01:29:04 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13950091
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH v5 1/5] null_blk: generate null_blk configfs features string
Date: Sat, 25 Jan 2025 10:29:04 +0900
Message-ID: <20250125012908.1259887-2-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>
References: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>

The null_blk configfs file 'features' provides a string that lists the
available null_blk features for userspace programs to reference. The string
is defined as a long constant in the code, which tends to be forgotten when
features are added. It also causes checkpatch.pl to report "WARNING: quoted
string split across lines".

To avoid these drawbacks, generate the feature string on the fly: refer to
the ca_name field of each element in the nullb_device_attrs table and
concatenate them into the given buffer. Also, sort the nullb_device_attrs
table elements in alphabetical order.

Of note is that the feature "index" was missing before this commit. This
commit adds it to the generated string.

Suggested-by: Bart Van Assche
Reviewed-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 90 ++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 39 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d94ef37480bd..0725d221cff4 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -592,41 +592,41 @@ static ssize_t nullb_device_zone_offline_store(struct config_item *item,
 CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 
 static struct configfs_attribute *nullb_device_attrs[] = {
-	&nullb_device_attr_size,
-	&nullb_device_attr_completion_nsec,
-	&nullb_device_attr_submit_queues,
-	&nullb_device_attr_poll_queues,
-	&nullb_device_attr_home_node,
-	&nullb_device_attr_queue_mode,
+	&nullb_device_attr_badblocks,
+	&nullb_device_attr_blocking,
 	&nullb_device_attr_blocksize,
-	&nullb_device_attr_max_sectors,
-	&nullb_device_attr_irqmode,
+	&nullb_device_attr_cache_size,
+	&nullb_device_attr_completion_nsec,
+	&nullb_device_attr_discard,
+	&nullb_device_attr_fua,
+	&nullb_device_attr_home_node,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
-	&nullb_device_attr_blocking,
-	&nullb_device_attr_use_per_node_hctx,
-	&nullb_device_attr_power,
-	&nullb_device_attr_memory_backed,
-	&nullb_device_attr_discard,
+	&nullb_device_attr_irqmode,
+	&nullb_device_attr_max_sectors,
 	&nullb_device_attr_mbps,
-	&nullb_device_attr_cache_size,
-	&nullb_device_attr_badblocks,
-	&nullb_device_attr_zoned,
-	&nullb_device_attr_zone_size,
-	&nullb_device_attr_zone_capacity,
-	&nullb_device_attr_zone_nr_conv,
-	&nullb_device_attr_zone_max_open,
-	&nullb_device_attr_zone_max_active,
-	&nullb_device_attr_zone_append_max_sectors,
-	&nullb_device_attr_zone_readonly,
-	&nullb_device_attr_zone_offline,
-	&nullb_device_attr_zone_full,
-	&nullb_device_attr_virt_boundary,
+	&nullb_device_attr_memory_backed,
 	&nullb_device_attr_no_sched,
-	&nullb_device_attr_shared_tags,
-	&nullb_device_attr_shared_tag_bitmap,
-	&nullb_device_attr_fua,
+	&nullb_device_attr_poll_queues,
+	&nullb_device_attr_power,
+	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_rotational,
+	&nullb_device_attr_shared_tag_bitmap,
+	&nullb_device_attr_shared_tags,
+	&nullb_device_attr_size,
+	&nullb_device_attr_submit_queues,
+	&nullb_device_attr_use_per_node_hctx,
+	&nullb_device_attr_virt_boundary,
+	&nullb_device_attr_zone_append_max_sectors,
+	&nullb_device_attr_zone_capacity,
+	&nullb_device_attr_zone_full,
+	&nullb_device_attr_zone_max_active,
+	&nullb_device_attr_zone_max_open,
+	&nullb_device_attr_zone_nr_conv,
+	&nullb_device_attr_zone_offline,
+	&nullb_device_attr_zone_readonly,
+	&nullb_device_attr_zone_size,
+	&nullb_device_attr_zoned,
 	NULL,
 };
 
@@ -704,16 +704,28 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
-	return snprintf(page, PAGE_SIZE,
-			"badblocks,blocking,blocksize,cache_size,fua,"
-			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
-			"poll_queues,power,queue_mode,shared_tag_bitmap,"
-			"shared_tags,size,submit_queues,use_per_node_hctx,"
-			"virt_boundary,zoned,zone_capacity,zone_max_active,"
-			"zone_max_open,zone_nr_conv,zone_offline,zone_readonly,"
-			"zone_size,zone_append_max_sectors,zone_full,"
-			"rotational\n");
+
+	struct configfs_attribute **entry;
+	char delimiter = ',';
+	size_t left = PAGE_SIZE;
+	size_t written = 0;
+	int ret;
+
+	for (entry = &nullb_device_attrs[0]; *entry && left > 0; entry++) {
+		if (!*(entry + 1))
+			delimiter = '\n';
+		ret = snprintf(page + written, left, "%s%c", (*entry)->ca_name,
+			       delimiter);
+		if (ret >= left) {
+			WARN_ONCE(1, "Too many null_blk features to print\n");
+			memzero_explicit(page, PAGE_SIZE);
+			return -ENOBUFS;
+		}
+		left -= ret;
+		written += ret;
+	}
+
+	return written;
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
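
Userspace test programs consume this file as a single comma-separated line
terminated by a newline. A minimal sketch of such a consumer (the path assumes
configfs is mounted at /sys/kernel/config; the helper name is illustrative and
not part of the driver):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Check whether the null_blk features string advertises a given feature. */
static bool null_blk_has_feature(const char *feature)
{
	char buf[4096];
	char *tok;
	FILE *f = fopen("/sys/kernel/config/nullb/features", "r");

	if (!f)
		return false;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return false;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';

	/* The generated string is a flat comma-separated list of attribute names. */
	for (tok = strtok(buf, ","); tok; tok = strtok(NULL, ","))
		if (!strcmp(tok, feature))
			return true;
	return false;
}

int main(void)
{
	printf("badblocks supported: %d\n", null_blk_has_feature("badblocks"));
	return 0;
}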
From patchwork Sat Jan 25 01:29:05 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13950092
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH v5 2/5] null_blk: introduce badblocks_once parameter
Date: Sat, 25 Jan 2025 10:29:05 +0900
Message-ID: <20250125012908.1259887-3-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>
References: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>

When IO errors happen on real storage devices, IOs repeated to the same
target range can succeed thanks to device recovery features such as reserved
block assignment. To simulate such IO errors and recoveries, introduce the
new badblocks_once parameter. When this parameter is set to 1, the specified
badblocks are cleared after the first IO error, so that the next IO to the
same blocks succeeds.
Reviewed-by: Damien Le Moal
Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 11 ++++++++---
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 0725d221cff4..2a060a6ea8c0 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -473,6 +473,7 @@ NULLB_DEVICE_ATTR(shared_tags, bool, NULL);
 NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -593,6 +594,7 @@ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 
 static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_badblocks,
+	&nullb_device_attr_badblocks_once,
 	&nullb_device_attr_blocking,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_cache_size,
@@ -1315,10 +1317,13 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 	sector_t first_bad;
 	int bad_sectors;
 
-	if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
-		return BLK_STS_IOERR;
+	if (!badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+		return BLK_STS_OK;
 
-	return BLK_STS_OK;
+	if (cmd->nq->dev->badblocks_once)
+		badblocks_clear(bb, first_bad, bad_sectors);
+
+	return BLK_STS_IOERR;
 }
 
 static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 6f9fe6171087..3c4c07f0418b 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -63,6 +63,7 @@ struct nullb_device {
 	unsigned long flags; /* device flags */
 	unsigned int curr_cache;
 	struct badblocks badblocks;
+	bool badblocks_once;
 
 	unsigned int nr_zones;
 	unsigned int nr_zones_imp_open;
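
A natural way to exercise the flag is to mark a range bad with badblocks_once
enabled and observe that only the first IO to that range fails. A rough
userspace sketch of the configfs setup (the nullb0 device directory and the
"+start-end" badblocks syntax refer to an already created null_blk configfs
device; treat the exact paths as assumptions, not something this patch adds):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a short string to a configfs attribute file. */
static int write_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	int ret = 0;

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0)
		ret = -1;
	close(fd);
	return ret;
}

int main(void)
{
	const char *dev = "/sys/kernel/config/nullb/nullb0";
	char path[256];

	/* Fail IOs to sectors 0..7 only once, then clear the bad range. */
	snprintf(path, sizeof(path), "%s/badblocks_once", dev);
	write_attr(path, "1");
	snprintf(path, sizeof(path), "%s/badblocks", dev);
	write_attr(path, "+0-7");
	return 0;
}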
From patchwork Sat Jan 25 01:29:06 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13950094
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH v5 3/5] null_blk: replace null_process_cmd() call in null_zone_write()
Date: Sat, 25 Jan 2025 10:29:06 +0900
Message-ID: <20250125012908.1259887-4-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>
References: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>

In preparation for supporting partial data transfer for bad blocks, replace
the null_process_cmd() call in null_zone_write() with equivalent calls to
null_handle_badblocks() and null_handle_memory_backed(). This commit does not
change behavior. It allows null_handle_badblocks() to return the size of a
partial data transfer in the following commit, so that null_zone_write() can
move the write pointer appropriately.
Signed-off-by: Shin'ichiro Kawasaki
Reviewed-by: Damien Le Moal
---
 drivers/block/null_blk/main.c     | 11 ++++-------
 drivers/block/null_blk/null_blk.h |  5 +++++
 drivers/block/null_blk/zoned.c    | 15 ++++++++++++---
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 2a060a6ea8c0..87037cb375c9 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1309,9 +1309,8 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 	return sts;
 }
 
-static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
-						 sector_t sector,
-						 sector_t nr_sectors)
+blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
+				   sector_t nr_sectors)
 {
 	struct badblocks *bb = &cmd->nq->dev->badblocks;
 	sector_t first_bad;
@@ -1326,10 +1325,8 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 	return BLK_STS_IOERR;
 }
 
-static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
-						      enum req_op op,
-						      sector_t sector,
-						      sector_t nr_sectors)
+blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
+				       sector_t sector, sector_t nr_sectors)
 {
 	struct nullb_device *dev = cmd->nq->dev;
 
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 3c4c07f0418b..ee60f3a88796 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -132,6 +132,11 @@ blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
 				 sector_t nr_sectors);
 blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
 			      sector_t sector, unsigned int nr_sectors);
+blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
+				   sector_t nr_sectors);
+blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
+				       sector_t sector, sector_t nr_sectors);
+
 #ifdef CONFIG_BLK_DEV_ZONED
 int null_init_zoned_dev(struct nullb_device *dev, struct queue_limits *lim);
 
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 0d5f9bf95229..7677f6cf23f4 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -412,9 +412,18 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 		zone->cond = BLK_ZONE_COND_IMP_OPEN;
 	}
 
-	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
-	if (ret != BLK_STS_OK)
-		goto unlock_zone;
+	if (dev->badblocks.shift != -1) {
+		ret = null_handle_badblocks(cmd, sector, nr_sectors);
+		if (ret != BLK_STS_OK)
+			goto unlock_zone;
+	}
+
+	if (dev->memory_backed) {
+		ret = null_handle_memory_backed(cmd, REQ_OP_WRITE, sector,
+						nr_sectors);
+		if (ret != BLK_STS_OK)
+			goto unlock_zone;
+	}
 
 	zone->wp += nr_sectors;
 	if (zone->wp == zone->start + zone->capacity) {
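
For reference, at this point in the series null_process_cmd() reduces, for a
write command, to the same two helper calls used above, which is why the
replacement is behavior-neutral. Its body (as visible in the context of the
later patches, reproduced here only for readability) is effectively:

blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
			      sector_t sector, unsigned int nr_sectors)
{
	struct nullb_device *dev = cmd->nq->dev;
	blk_status_t ret;

	/* Fail the whole IO if any target sector is marked bad. */
	if (dev->badblocks.shift != -1) {
		ret = null_handle_badblocks(cmd, sector, nr_sectors);
		if (ret != BLK_STS_OK)
			return ret;
	}

	/* Only memory-backed devices actually move data. */
	if (dev->memory_backed)
		return null_handle_memory_backed(cmd, op, sector, nr_sectors);

	return BLK_STS_OK;
}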
From patchwork Sat Jan 25 01:29:07 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13950093
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH v5 4/5] null_blk: pass transfer size to null_handle_rq()
Date: Sat, 25 Jan 2025 10:29:07 +0900
Message-ID: <20250125012908.1259887-5-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>
References: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>

In preparation for supporting partial data transfer, add a new argument to
null_handle_rq() to pass the number of sectors to transfer. While at it,
rename the function from null_handle_rq() to null_handle_data_transfer().
This commit does not change behavior.
Reviewed-by: Damien Le Moal
Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 87037cb375c9..802576698812 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1263,25 +1263,37 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	return err;
 }
 
-static blk_status_t null_handle_rq(struct nullb_cmd *cmd)
+/*
+ * Transfer data for the given request. The transfer size is capped with the
+ * nr_sectors argument.
+ */
+static blk_status_t null_handle_data_transfer(struct nullb_cmd *cmd,
+					      sector_t nr_sectors)
 {
 	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct nullb *nullb = cmd->nq->dev->nullb;
 	int err = 0;
 	unsigned int len;
 	sector_t sector = blk_rq_pos(rq);
+	unsigned int max_bytes = nr_sectors << SECTOR_SHIFT;
+	unsigned int transferred_bytes = 0;
 	struct req_iterator iter;
 	struct bio_vec bvec;
 
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
+		if (transferred_bytes + len > max_bytes)
+			len = max_bytes - transferred_bytes;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(req_op(rq)), sector,
 				     rq->cmd_flags & REQ_FUA);
 		if (err)
 			break;
 		sector += len >> SECTOR_SHIFT;
+		transferred_bytes += len;
+		if (transferred_bytes >= max_bytes)
+			break;
 	}
 	spin_unlock_irq(&nullb->lock);
 
@@ -1333,7 +1345,7 @@ blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
 	if (op == REQ_OP_DISCARD)
 		return null_handle_discard(dev, sector, nr_sectors);
 
-	return null_handle_rq(cmd);
+	return null_handle_data_transfer(cmd, nr_sectors);
 }
 
 static void nullb_zero_read_cmd_buffer(struct nullb_cmd *cmd)
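
The byte capping in the segment loop can be checked in isolation. A standalone
sketch that mirrors the logic with a plain array of segment lengths (the
function name and the sample sizes are illustrative only, not driver code):

#include <stdio.h>

/* Walk segment lengths and "transfer" them, never exceeding max_bytes. */
static unsigned int transfer_capped(const unsigned int *seg_len, int nr_segs,
				    unsigned int max_bytes)
{
	unsigned int transferred = 0;
	int i;

	for (i = 0; i < nr_segs; i++) {
		unsigned int len = seg_len[i];

		if (transferred + len > max_bytes)
			len = max_bytes - transferred;
		/* ...the data copy for 'len' bytes would happen here... */
		transferred += len;
		if (transferred >= max_bytes)
			break;
	}
	return transferred;
}

int main(void)
{
	unsigned int segs[] = { 4096, 4096, 4096 };

	/* Cap at 6 sectors (3072 bytes): prints 3072, not 12288. */
	printf("%u\n", transfer_capped(segs, 3, 6 << 9));
	return 0;
}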
From patchwork Sat Jan 25 01:29:08 2025
X-Patchwork-Submitter: Shin'ichiro Kawasaki
X-Patchwork-Id: 13950095
From: Shin'ichiro Kawasaki
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Damien Le Moal, Bart Van Assche, Shin'ichiro Kawasaki
Subject: [PATCH v5 5/5] null_blk: do partial IO for bad blocks
Date: Sat, 25 Jan 2025 10:29:08 +0900
Message-ID: <20250125012908.1259887-6-shinichiro.kawasaki@wdc.com>
In-Reply-To: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>
References: <20250125012908.1259887-1-shinichiro.kawasaki@wdc.com>

The current null_blk implementation checks if any bad blocks exist in the
target blocks of each IO. If so, the IO fails and no data is transferred for
any of the IO target blocks. However, when real storage devices have bad
blocks, the devices may transfer data partially, up to the first bad block
(e.g., SAS drives). In particular, when the IO is a write operation, such a
partial IO leaves partially written data on the device.

To simulate such partial IO using null_blk, introduce the new parameter
'badblocks_partial_io'. When this parameter is set, null_handle_badblocks()
returns the number of sectors for the partial IO through its third argument,
which becomes a pointer. Pass the returned number of sectors to the
subsequent calls to null_handle_memory_backed() in null_process_cmd() and
null_zone_write().
Reviewed-by: Damien Le Moal
Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Shin'ichiro Kawasaki
---
 drivers/block/null_blk/main.c     | 40 ++++++++++++++++++++++++-------
 drivers/block/null_blk/null_blk.h |  4 ++--
 drivers/block/null_blk/zoned.c    |  9 ++++---
 3 files changed, 40 insertions(+), 13 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 802576698812..31d44cef6841 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -474,6 +474,7 @@ NULLB_DEVICE_ATTR(shared_tag_bitmap, bool, NULL);
 NULLB_DEVICE_ATTR(fua, bool, NULL);
 NULLB_DEVICE_ATTR(rotational, bool, NULL);
 NULLB_DEVICE_ATTR(badblocks_once, bool, NULL);
+NULLB_DEVICE_ATTR(badblocks_partial_io, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -595,6 +596,7 @@ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
 static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_badblocks,
 	&nullb_device_attr_badblocks_once,
+	&nullb_device_attr_badblocks_partial_io,
 	&nullb_device_attr_blocking,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_cache_size,
@@ -1321,19 +1323,40 @@ static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
 	return sts;
 }
 
+/*
+ * Check if the command should fail for the badblocks. If so, return
+ * BLK_STS_IOERR and return number of partial I/O sectors to be written or read,
+ * which may be less than the requested number of sectors.
+ *
+ * @cmd: The command to handle.
+ * @sector: The start sector for I/O.
+ * @nr_sectors: Specifies number of sectors to write or read, and returns the
+ *              number of sectors to be written or read.
+ */
 blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
-				   sector_t nr_sectors)
+				   unsigned int *nr_sectors)
 {
 	struct badblocks *bb = &cmd->nq->dev->badblocks;
+	struct nullb_device *dev = cmd->nq->dev;
+	unsigned int block_sectors = dev->blocksize >> SECTOR_SHIFT;
 	sector_t first_bad;
 	int bad_sectors;
+	unsigned int partial_io_sectors = 0;
 
-	if (!badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+	if (!badblocks_check(bb, sector, *nr_sectors, &first_bad, &bad_sectors))
 		return BLK_STS_OK;
 
 	if (cmd->nq->dev->badblocks_once)
 		badblocks_clear(bb, first_bad, bad_sectors);
 
+	if (cmd->nq->dev->badblocks_partial_io) {
+		if (!IS_ALIGNED(first_bad, block_sectors))
+			first_bad = ALIGN_DOWN(first_bad, block_sectors);
+		if (sector < first_bad)
+			partial_io_sectors = first_bad - sector;
+	}
+	*nr_sectors = partial_io_sectors;
+
 	return BLK_STS_IOERR;
 }
 
@@ -1392,18 +1415,19 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
 			      sector_t sector, unsigned int nr_sectors)
 {
 	struct nullb_device *dev = cmd->nq->dev;
+	blk_status_t badblocks_ret = BLK_STS_OK;
 	blk_status_t ret;
 
-	if (dev->badblocks.shift != -1) {
-		ret = null_handle_badblocks(cmd, sector, nr_sectors);
+	if (dev->badblocks.shift != -1)
+		badblocks_ret = null_handle_badblocks(cmd, sector, &nr_sectors);
+
+	if (dev->memory_backed && nr_sectors) {
+		ret = null_handle_memory_backed(cmd, op, sector, nr_sectors);
 		if (ret != BLK_STS_OK)
 			return ret;
 	}
 
-	if (dev->memory_backed)
-		return null_handle_memory_backed(cmd, op, sector, nr_sectors);
-
-	return BLK_STS_OK;
+	return badblocks_ret;
 }
 
 static void null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index ee60f3a88796..7bb6128dbaaf 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -64,6 +64,7 @@ struct nullb_device {
 	unsigned int curr_cache;
 	struct badblocks badblocks;
 	bool badblocks_once;
+	bool badblocks_partial_io;
 
 	unsigned int nr_zones;
 	unsigned int nr_zones_imp_open;
@@ -133,11 +134,10 @@ blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
 blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
 			      sector_t sector, unsigned int nr_sectors);
 blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, sector_t sector,
-				   sector_t nr_sectors);
+				   unsigned int *nr_sectors);
 blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, enum req_op op,
 				       sector_t sector, sector_t nr_sectors);
-
 #ifdef CONFIG_BLK_DEV_ZONED
 int null_init_zoned_dev(struct nullb_device *dev, struct queue_limits *lim);
 int null_register_zoned_dev(struct nullb *nullb);
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 7677f6cf23f4..4e5728f45989 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -353,6 +353,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 	struct nullb_device *dev = cmd->nq->dev;
 	unsigned int zno = null_zone_no(dev, sector);
 	struct nullb_zone *zone = &dev->zones[zno];
+	blk_status_t badblocks_ret = BLK_STS_OK;
 	blk_status_t ret;
 
 	trace_nullb_zone_op(cmd, zno, zone->cond);
@@ -413,9 +414,11 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 	}
 
 	if (dev->badblocks.shift != -1) {
-		ret = null_handle_badblocks(cmd, sector, nr_sectors);
-		if (ret != BLK_STS_OK)
+		badblocks_ret = null_handle_badblocks(cmd, sector, &nr_sectors);
+		if (badblocks_ret != BLK_STS_OK && !nr_sectors) {
+			ret = badblocks_ret;
 			goto unlock_zone;
+		}
 	}
 
 	if (dev->memory_backed) {
@@ -438,7 +441,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 		zone->cond = BLK_ZONE_COND_FULL;
 	}
 
-	ret = BLK_STS_OK;
+	ret = badblocks_ret;
 
 unlock_zone:
 	null_unlock_zone(dev, zone);
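
The size of the partial transfer is obtained by rounding the first bad sector
down to a device block boundary and counting the sectors that precede it. A
standalone sketch of that arithmetic (the helper names, SECTOR_SHIFT value and
sample numbers are illustrative; the driver itself relies on the kernel's
IS_ALIGNED()/ALIGN_DOWN() macros):

#include <stdio.h>

#define SECTOR_SHIFT 9

/* Round x down to a multiple of a, mirroring ALIGN_DOWN in the patch. */
static unsigned long long align_down(unsigned long long x, unsigned long long a)
{
	return x - (x % a);
}

/*
 * Sectors that can still be transferred before the first bad block, as
 * null_handle_badblocks() computes them when badblocks_partial_io is set.
 */
static unsigned int partial_io_sectors(unsigned long long sector,
				       unsigned long long first_bad,
				       unsigned int blocksize)
{
	unsigned int block_sectors = blocksize >> SECTOR_SHIFT;

	first_bad = align_down(first_bad, block_sectors);
	return sector < first_bad ? first_bad - sector : 0;
}

int main(void)
{
	/* 4KiB blocks (8 sectors); IO starts at sector 0, first bad sector is 11. */
	printf("%u\n", partial_io_sectors(0, 11, 4096));	/* prints 8 */
	return 0;
}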