From patchwork Fri Nov 20 16:35:21 2015
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 7669871
From: Christoph Hellwig
To: keith.busch@intel.com, axboe@fb.com
Cc: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Subject: [PATCH 26/47] nvme: merge nvme_abort_req and nvme_timeout
Date: Fri, 20 Nov 2015 17:35:21 +0100
Message-Id: <1448037342-18384-27-git-send-email-hch@lst.de>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1448037342-18384-1-git-send-email-hch@lst.de>
References: <1448037342-18384-1-git-send-email-hch@lst.de>

We want to be able to return better error values from nvme_timeout, which
is significantly easier if the two functions are merged.  Also clean up
and reduce the printk spew so that we only get one message per abort.

Signed-off-by: Christoph Hellwig
Signed-off-by: Keith Busch
---
 drivers/nvme/host/pci.c | 47 ++++++++++++++++++-----------------------------
 1 file changed, 18 insertions(+), 29 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df3ddeb..58cff75 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1070,13 +1070,7 @@ static int adapter_delete_sq(struct nvme_dev *dev, u16 sqid)
 	return adapter_delete_queue(dev, nvme_admin_delete_sq, sqid);
 }
 
-/**
- * nvme_abort_req - Attempt aborting a request
- *
- * Schedule controller reset if the command was already aborted once before and
- * still hasn't been returned to the driver, or if this is the admin queue.
- */
-static void nvme_abort_req(struct request *req)
+static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 {
 	struct nvme_cmd_info *cmd_rq = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = cmd_rq->nvmeq;
@@ -1085,6 +1079,11 @@ static void nvme_abort_req(struct request *req)
 	struct nvme_cmd_info *abort_cmd;
 	struct nvme_command cmd;
 
+	/*
+	 * Schedule controller reset if the command was already aborted once
+	 * before and still hasn't been returned to the driver, or if this is
+	 * the admin queue.
+	 */
 	if (!nvmeq->qid || cmd_rq->aborted) {
 		spin_lock_irq(&dev_list_lock);
 		if (!__nvme_reset(dev)) {
@@ -1093,16 +1092,16 @@ static void nvme_abort_req(struct request *req)
 				 req->tag, nvmeq->qid);
 		}
 		spin_unlock_irq(&dev_list_lock);
-		return;
+		return BLK_EH_RESET_TIMER;
 	}
 
 	if (!dev->ctrl.abort_limit)
-		return;
+		return BLK_EH_RESET_TIMER;
 
 	abort_req = blk_mq_alloc_request(dev->ctrl.admin_q, WRITE,
 			BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(abort_req))
-		return;
+		return BLK_EH_RESET_TIMER;
 
 	abort_cmd = blk_mq_rq_to_pdu(abort_req);
 	nvme_set_info(abort_cmd, abort_req, abort_completion);
@@ -1116,9 +1115,16 @@ static void nvme_abort_req(struct request *req)
 	--dev->ctrl.abort_limit;
 	cmd_rq->aborted = 1;
 
-	dev_warn(nvmeq->q_dmadev, "Aborting I/O %d QID %d\n", req->tag,
-							nvmeq->qid);
+	dev_warn(nvmeq->q_dmadev, "I/O %d QID %d timeout, aborting\n",
+		 req->tag, nvmeq->qid);
 	nvme_submit_cmd(dev->queues[0], &cmd);
+
+	/*
+	 * The aborted req will be completed on receiving the abort req.
+	 * We enable the timer again. If hit twice, it'll cause a device reset,
+	 * as the device then is in a faulty state.
+	 */
+	return BLK_EH_RESET_TIMER;
 }
 
 static void nvme_cancel_queue_ios(struct request *req, void *data, bool reserved)
@@ -1149,23 +1155,6 @@ static void nvme_cancel_queue_ios(struct request *req, void *data, bool reserved)
 	fn(nvmeq, ctx, &cqe);
 }
 
-static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
-{
-	struct nvme_cmd_info *cmd = blk_mq_rq_to_pdu(req);
-	struct nvme_queue *nvmeq = cmd->nvmeq;
-
-	dev_warn(nvmeq->q_dmadev, "Timeout I/O %d QID %d\n", req->tag,
-							nvmeq->qid);
-	nvme_abort_req(req);
-
-	/*
-	 * The aborted req will be completed on receiving the abort req.
-	 * We enable the timer again. If hit twice, it'll cause a device reset,
-	 * as the device then is in a faulty state.
-	 */
-	return BLK_EH_RESET_TIMER;
-}
-
 static void nvme_free_queue(struct nvme_queue *nvmeq)
 {
 	dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth),
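
For readers without the surrounding driver at hand, the control flow of the
merged handler can be modelled outside the kernel.  The sketch below is a
stand-alone, user-space toy: every type and helper in it (toy_ctrl, toy_req,
toy_schedule_reset, toy_send_abort) is invented for illustration, and only
the decision structure, not any NVMe or blk-mq API, is taken from the patch.
Every path re-arms the timer by returning BLK_EH_RESET_TIMER; a first
timeout on an I/O command sends an abort, while a second timeout on the same
request, or a timeout on the admin queue, escalates to a controller reset.

/*
 * Toy, user-space model of the merged timeout handler's control flow.
 * All types and helpers below are stand-ins invented for illustration;
 * only the decision structure mirrors the patch above.
 */
#include <stdbool.h>
#include <stdio.h>

enum blk_eh_timer_return { BLK_EH_RESET_TIMER };	/* only value used here */

struct toy_req {
	int tag;
	int qid;		/* 0 means "admin queue" in this model */
	bool aborted;		/* has an abort already been sent for it? */
};

struct toy_ctrl {
	int abort_limit;	/* how many outstanding aborts we may send */
};

/* Stand-ins for __nvme_reset() and nvme_submit_cmd() in the real driver. */
static void toy_schedule_reset(struct toy_req *req)
{
	printf("I/O %d QID %d timeout, reset controller\n", req->tag, req->qid);
}

static void toy_send_abort(struct toy_req *req)
{
	printf("I/O %d QID %d timeout, aborting\n", req->tag, req->qid);
}

/*
 * Mirrors the shape of nvme_timeout() after the merge: every path re-arms
 * the timer, and the interesting difference is what happens before returning.
 */
static enum blk_eh_timer_return toy_timeout(struct toy_ctrl *ctrl,
					    struct toy_req *req)
{
	/* Admin command, or second timeout of the same request: reset. */
	if (req->qid == 0 || req->aborted) {
		toy_schedule_reset(req);
		return BLK_EH_RESET_TIMER;
	}

	/* No abort budget left: just wait for another timeout. */
	if (ctrl->abort_limit == 0)
		return BLK_EH_RESET_TIMER;

	/* First timeout on an I/O command: send an abort and re-arm. */
	ctrl->abort_limit--;
	req->aborted = true;
	toy_send_abort(req);
	return BLK_EH_RESET_TIMER;
}

int main(void)
{
	struct toy_ctrl ctrl = { .abort_limit = 1 };
	struct toy_req req = { .tag = 7, .qid = 1, .aborted = false };

	toy_timeout(&ctrl, &req);	/* first timeout: abort is sent */
	toy_timeout(&ctrl, &req);	/* second timeout: controller reset */
	return 0;
}

Built with a plain "cc" invocation, running the toy prints the abort message
followed by the reset message, matching the two-strike behaviour described by
the comment the patch adds at the end of the handler.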