From patchwork Wed Jul  5 06:53:08 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 9825923
From: Sagi Grimberg
To: Jens Axboe, linux-block@vger.kernel.org
Cc: linux-nvme@lists.infradead.org, Christoph Hellwig, Keith Busch, Ming Lei
Subject: [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues
Date: Wed, 5 Jul 2017 09:53:08 +0300
Message-Id: <1499237591-15861-6-git-send-email-sagi@grimberg.me>
In-Reply-To: <1499237591-15861-1-git-send-email-sagi@grimberg.me>
References: <1499237591-15861-1-git-send-email-sagi@grimberg.me>
X-Mailing-List: linux-block@vger.kernel.org

When we requeue a request, we can always insert it back into the
scheduler at requeue time by kicking the requeue work, instead of
doing it when restarting the queues. So get rid of the requeue kicks
scattered across nvme (core and transport drivers).

Also, there is no longer any need to start the hw queues in
nvme_kill_queues: we don't stop the hw queues anymore, so there is no
need to start them.
Signed-off-by: Sagi Grimberg
---
 drivers/nvme/host/core.c | 19 ++-----------------
 drivers/nvme/host/fc.c   |  4 +---
 drivers/nvme/host/pci.c  |  1 -
 drivers/nvme/host/rdma.c |  1 -
 4 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d70df1d0072d..48cafaa6fbc5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -131,7 +131,7 @@ void nvme_complete_rq(struct request *req)
 {
 	if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
 		nvme_req(req)->retries++;
-		blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
+		blk_mq_requeue_request(req, true);
 		return;
 	}
@@ -2694,9 +2694,6 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 	/* Forcibly unquiesce queues to avoid blocking dispatch */
 	blk_mq_unquiesce_queue(ctrl->admin_q);

-	/* Forcibly start all queues to avoid having stuck requests */
-	blk_mq_start_hw_queues(ctrl->admin_q);
-
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		/*
 		 * Revalidating a dead namespace sets capacity to 0. This will
@@ -2709,16 +2706,6 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)

 		/* Forcibly unquiesce queues to avoid blocking dispatch */
 		blk_mq_unquiesce_queue(ns->queue);
-
-		/*
-		 * Forcibly start all queues to avoid having stuck requests.
-		 * Note that we must ensure the queues are not stopped
-		 * when the final removal happens.
-		 */
-		blk_mq_start_hw_queues(ns->queue);
-
-		/* draining requests in requeue list */
-		blk_mq_kick_requeue_list(ns->queue);
 	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
@@ -2787,10 +2774,8 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;

 	mutex_lock(&ctrl->namespaces_mutex);
-	list_for_each_entry(ns, &ctrl->namespaces, list) {
+	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_mq_unquiesce_queue(ns->queue);
-		blk_mq_kick_requeue_list(ns->queue);
-	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 3e35fc622680..8d55e7827932 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2321,10 +2321,8 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	if (ret)
 		goto out_delete_hw_queue;

-	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
+	if (ctrl->ctrl.state != NVME_CTRL_NEW)
 		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
-		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
-	}

 	ret = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (ret)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df7c8a355075..8d8e2eddb3da 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1353,7 +1353,6 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 		}
 	} else {
 		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
-		blk_mq_kick_requeue_list(dev->ctrl.admin_q);
 	}

 	return 0;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c8bf2606ba64..f2cf9638b4d2 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -792,7 +792,6 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	 * new IO
 	 */
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
-	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);

 	nvme_start_queues(&ctrl->ctrl);
 	nvme_rdma_reconnect_or_remove(ctrl);