From patchwork Fri Apr 3 15:58:17 2015
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 6157361
From: Jens Axboe
Subject: [PATCH 1/7] blk-mq: allow the callback to blk_mq_tag_busy_iter() to stop looping
Date: Fri, 3 Apr 2015 09:58:17 -0600
Message-ID: <1428076703-31014-2-git-send-email-axboe@fb.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1428076703-31014-1-git-send-email-axboe@fb.com>
References: <1428076703-31014-1-git-send-email-axboe@fb.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Currently blk_mq_tag_busy_iter() loops over all busy tags for a given
hardware queue. But sometimes we are looking for a specific request, and
once we find it there is no need to keep iterating over the rest of them.

Change the busy_iter_fn callback to return a bool, where a true return
breaks out of the search. Update the current callers (the blk-mq timeout
handler and NVMe IO cancel).
Signed-off-by: Jens Axboe
---
 block/blk-mq-tag.c        |  6 ++++--
 block/blk-mq.c            | 10 ++++++----
 drivers/block/nvme-core.c |  7 ++++---
 include/linux/blk-mq.h    |  2 +-
 4 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index be3290cc0644..129a881b8ef1 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -430,8 +430,10 @@ static void bt_for_each(struct blk_mq_hw_ctx *hctx,
 		     bit < bm->depth;
 		     bit = find_next_bit(&bm->word, bm->depth, bit + 1)) {
 			rq = blk_mq_tag_to_rq(hctx->tags, off + bit);
-			if (rq->q == hctx->queue)
-				fn(hctx, rq, data, reserved);
+			if (rq->q != hctx->queue)
+				continue;
+			if (fn(hctx, rq, data, reserved))
+				break;
 		}

 		off += (1 << bt->bits_per_word);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b7b8933ec241..1cd34d4d707c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -622,8 +622,8 @@ void blk_mq_rq_timed_out(struct request *req, bool reserved)
 	}
 }

-static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
-		struct request *rq, void *priv, bool reserved)
+static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, struct request *rq,
+		void *priv, bool reserved)
 {
 	struct blk_mq_timeout_data *data = priv;

@@ -636,10 +636,10 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 			rq->errors = -EIO;
 			blk_mq_complete_request(rq);
 		}
-		return;
+		return false;
 	}

 	if (rq->cmd_flags & REQ_NO_TIMEOUT)
-		return;
+		return false;

 	if (time_after_eq(jiffies, rq->deadline)) {
 		if (!blk_mark_rq_complete(rq))
@@ -648,6 +648,8 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 		data->next = rq->deadline;
 		data->next_set = 1;
 	}
+
+	return false;
 }

 static void blk_mq_rq_timer(unsigned long priv)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index e23be20a3417..46697ddfda82 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1260,7 +1260,7 @@ static void nvme_abort_req(struct request *req)
 	}
 }

-static void nvme_cancel_queue_ios(struct blk_mq_hw_ctx *hctx,
+static bool nvme_cancel_queue_ios(struct blk_mq_hw_ctx *hctx,
 				struct request *req, void *data, bool reserved)
 {
 	struct nvme_queue *nvmeq = data;
@@ -1270,12 +1270,12 @@ static void nvme_cancel_queue_ios(struct blk_mq_hw_ctx *hctx,
 	struct nvme_completion cqe;

 	if (!blk_mq_request_started(req))
-		return;
+		return false;

 	cmd = blk_mq_rq_to_pdu(req);

 	if (cmd->ctx == CMD_CTX_CANCELLED)
-		return;
+		return false;

 	if (blk_queue_dying(req->q))
 		cqe.status = cpu_to_le16((NVME_SC_ABORT_REQ | NVME_SC_DNR) << 1);
@@ -1287,6 +1287,7 @@ static void nvme_cancel_queue_ios(struct blk_mq_hw_ctx *hctx,
 			 req->tag, nvmeq->qid);
 	ctx = cancel_cmd_info(cmd, &fn);
 	fn(nvmeq, ctx, &cqe);
+	return false;
 }

 static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 7aec86127335..b216deab80d9 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -94,7 +94,7 @@ typedef int (init_request_fn)(void *, struct request *, unsigned int,
 typedef void (exit_request_fn)(void *, struct request *, unsigned int,
 		unsigned int);

-typedef void (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *,
+typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *,
 		bool);

 struct blk_mq_ops {