From patchwork Wed Apr 3 08:42:21 2024
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13615454
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-scsi@vger.kernel.org,
    "Martin K. Petersen", dm-devel@lists.linux.dev, Mike Snitzer,
    linux-nvme@lists.infradead.org, Keith Busch, Christoph Hellwig
Subject: [PATCH v5 02/28] block: Remove req_bio_endio()
Date: Wed, 3 Apr 2024 17:42:21 +0900
Message-ID: <20240403084247.856481-3-dlemoal@kernel.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240403084247.856481-1-dlemoal@kernel.org>
References: <20240403084247.856481-1-dlemoal@kernel.org>

Moving the code of req_bio_endio() into its only caller, blk_update_request(),
allows reducing the number of accesses to and tests of bio and request fields.
Also, given that partial completions of zone append operations are not
possible and that zone append operations cannot be merged, the update of the
BIO sector using the request sector for these operations can be moved to
directly before the call to bio_endio().
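To make the restructuring easier to follow, the sketch below shows, as a
minimal standalone C program, the per-bio completion flow that
blk_update_request() ends up with after this change. All names and types
here (toy_bio, toy_complete_bio) are simplified stand-ins invented for
illustration, not kernel code; the authoritative logic is in the diff below.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the bio fields the completion path touches. */
struct toy_bio {
	unsigned int bytes_left;   /* plays the role of bio->bi_iter.bi_size */
	int status;                /* plays the role of bio->bi_status */
	unsigned long long sector; /* plays the role of bio->bi_iter.bi_sector */
};

/*
 * Mirrors the reworked flow: record an error, reject partial zone append
 * completions, advance the bio, and only on full completion (outside a
 * flush sequence) fix up the zone append sector and end the bio.
 */
static void toy_complete_bio(struct toy_bio *bio, unsigned int nbytes,
			     int error, bool zone_append, bool is_flush,
			     unsigned long long rq_sector)
{
	if (error)
		bio->status = error;
	else if (zone_append && nbytes != bio->bytes_left)
		bio->status = -EIO; /* partial zone append: not supported */

	bio->bytes_left -= nbytes; /* stands in for bio_advance() */

	/* Flush-sequence bios are finished by the flush machinery instead. */
	if (bio->bytes_left == 0 && !is_flush) {
		if (zone_append)
			bio->sector = rq_sector; /* sector actually written */
		/* stands in for bio_endio() */
		printf("bio done: status=%d sector=%llu\n",
		       bio->status, bio->sector);
	}
}

int main(void)
{
	struct toy_bio bio = { .bytes_left = 4096, .status = 0, .sector = 0 };

	/* A fully completed 4 KiB zone append written at sector 2048. */
	toy_complete_bio(&bio, 4096, 0, true, false, 2048);
	return 0;
}

Note how the zone append sector update now happens only on full completion,
immediately before ending the bio, which is what allows req_bio_endio() to be
dissolved into its caller.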
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 58 ++++++++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 30 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b8dbfed8b28b..fcbf0953a179 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -762,31 +762,6 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
 }
 EXPORT_SYMBOL(blk_dump_rq_flags);
 
-static void req_bio_endio(struct request *rq, struct bio *bio,
-			  unsigned int nbytes, blk_status_t error)
-{
-	if (unlikely(error)) {
-		bio->bi_status = error;
-	} else if (req_op(rq) == REQ_OP_ZONE_APPEND) {
-		/*
-		 * Partial zone append completions cannot be supported as the
-		 * BIO fragments may end up not being written sequentially.
-		 */
-		if (bio->bi_iter.bi_size != nbytes)
-			bio->bi_status = BLK_STS_IOERR;
-		else
-			bio->bi_iter.bi_sector = rq->__sector;
-	}
-
-	bio_advance(bio, nbytes);
-
-	if (unlikely(rq->rq_flags & RQF_QUIET))
-		bio_set_flag(bio, BIO_QUIET);
-	/* don't actually finish bio if it's part of flush sequence */
-	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
-		bio_endio(bio);
-}
-
 static void blk_account_io_completion(struct request *req, unsigned int bytes)
 {
 	if (req->part && blk_do_io_stat(req)) {
@@ -890,6 +865,8 @@ static void blk_complete_request(struct request *req)
 bool blk_update_request(struct request *req, blk_status_t error,
 		unsigned int nr_bytes)
 {
+	bool is_flush = req->rq_flags & RQF_FLUSH_SEQ;
+	bool quiet = req->rq_flags & RQF_QUIET;
 	int total_bytes;
 
 	trace_block_rq_complete(req, error, nr_bytes);
@@ -910,9 +887,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
 	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
 		__blk_crypto_rq_put_keyslot(req);
 
-	if (unlikely(error && !blk_rq_is_passthrough(req) &&
-		     !(req->rq_flags & RQF_QUIET)) &&
-	    !test_bit(GD_DEAD, &req->q->disk->state)) {
+	if (unlikely(error && !blk_rq_is_passthrough(req) && !quiet) &&
+	    !test_bit(GD_DEAD, &req->q->disk->state)) {
 		blk_print_req_error(req, error);
 		trace_block_rq_error(req, error, nr_bytes);
 	}
@@ -924,12 +900,34 @@ bool blk_update_request(struct request *req, blk_status_t error,
 		struct bio *bio = req->bio;
 		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
 
-		if (bio_bytes == bio->bi_iter.bi_size)
+		if (unlikely(error))
+			bio->bi_status = error;
+
+		if (bio_bytes == bio->bi_iter.bi_size) {
 			req->bio = bio->bi_next;
+		} else if (req_op(req) == REQ_OP_ZONE_APPEND &&
+			   error == BLK_STS_OK) {
+			/*
+			 * Partial zone append completions cannot be supported
+			 * as the BIO fragments may end up not being written
+			 * sequentially.
+			 */
+			bio->bi_status = BLK_STS_IOERR;
+		}
 
 		/* Completion has already been traced */
 		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
-		req_bio_endio(req, bio, bio_bytes, error);
+		if (unlikely(quiet))
+			bio_set_flag(bio, BIO_QUIET);
+
+		bio_advance(bio, bio_bytes);
+
+		/* Don't actually finish bio if it's part of flush sequence */
+		if (!bio->bi_iter.bi_size && !is_flush) {
+			if (req_op(req) == REQ_OP_ZONE_APPEND)
+				bio->bi_iter.bi_sector = req->__sector;
+			bio_endio(bio);
+		}
 
 		total_bytes += bio_bytes;
 		nr_bytes -= bio_bytes;