From patchwork Wed Oct 17 09:07:10 2018
From: Jianchao Wang
To: axboe@kernel.dk
Cc: hch@lst.de, keith.busch@linux.intel.com, ming.lei@redhat.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 1/3] blk-mq: introduce bio retrieve mechanism
Date: Wed, 17 Oct 2018 17:07:10 +0800
Message-Id: <1539767232-9389-2-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
References: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
Currently the request requeue mechanism cannot work well with updating
nr_hw_queues, because requests are tightly bound to a specific hw queue:
requests on a dying hw queue have to be failed, and that can be fatal
for the filesystem. In addition, the request_queue needs to be frozen
and drained before updating nr_hw_queues; if an IO times out there, we
have to depend on the LLDD to do recovery, but the recovery path may
sleep waiting for the request_queue to be drained, and an IO hang comes
up.

To avoid the two cases above, introduce a bio retrieve mechanism. Bio
retrieving does the following things:
 - flush requests on hctx->dispatch, the sw queues or the io scheduler
   queue
 - take the bios down from the requests and end the requests
 - requeue these bios and submit them through generic_make_request
   again later

Then we avoid failing requests on a dying hw queue and no longer depend
on the storage device to drain the request_queue.
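The "take the bios down and end the request" step above can be sketched in a small userspace model. This is an illustration only: `struct bio`, `struct bio_list`, `struct req` and the helpers below are simplified stand-ins mirroring the semantics of the kernel's bio_list helpers and blk_steal_bios(), not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for struct bio and struct bio_list. */
struct bio {
	int id;
	struct bio *bi_next;
};

struct bio_list {
	struct bio *head;
	struct bio *tail;
};

static void bio_list_init(struct bio_list *bl)
{
	bl->head = bl->tail = NULL;
}

/* Append a bio to the tail of the list, like the kernel's bio_list_add(). */
static void bio_list_add(struct bio_list *bl, struct bio *bio)
{
	bio->bi_next = NULL;
	if (bl->tail)
		bl->tail->bi_next = bio;
	else
		bl->head = bio;
	bl->tail = bio;
}

/* Detach and return the head of the list, like bio_list_pop(). */
static struct bio *bio_list_pop(struct bio_list *bl)
{
	struct bio *bio = bl->head;

	if (bio) {
		bl->head = bio->bi_next;
		if (!bl->head)
			bl->tail = NULL;
		bio->bi_next = NULL;
	}
	return bio;
}

/* A "request" carrying a chain of bios. */
struct req {
	struct bio_list bios;
	int ended;
};

/* Model of the retrieve step: steal every bio of @rq onto @list (the
 * role blk_steal_bios() plays), then end the now-empty request (the
 * role of blk_mq_end_request(rq, BLK_STS_OK)). */
static void retrieve_one_req(struct req *rq, struct bio_list *list)
{
	struct bio *bio;

	while ((bio = bio_list_pop(&rq->bios)))
		bio_list_add(list, bio);
	rq->ended = 1;
}
```

After retrieve_one_req() the request holds no bios and is ended, while the bios sit on the caller's list in submission order, ready to be resubmitted later.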
Signed-off-by: Jianchao Wang
---
 block/blk-core.c       |  2 ++
 block/blk-mq-sched.c   | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++
 block/blk-mq.c         | 42 ++++++++++++++++++++++++
 include/linux/blk-mq.h |  4 +++
 include/linux/blkdev.h |  2 ++
 5 files changed, 138 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index cdfabc5..f3c6fa8 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -807,6 +807,8 @@ void blk_cleanup_queue(struct request_queue *q)
 
 	/* @q won't process any more request, flush async actions */
 	del_timer_sync(&q->backing_dev_info->laptop_mode_wb_timer);
+	if (q->mq_ops)
+		cancel_delayed_work_sync(&q->bio_requeue_work);
 	blk_sync_queue(q);
 
 	/*
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 29bfe80..9d0b2a2 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -422,6 +422,94 @@ void blk_mq_sched_insert_requests(struct request_queue *q,
 	blk_mq_run_hw_queue(hctx, run_queue_async);
 }
 
+static void blk_mq_sched_retrieve_one_req(struct request *rq,
+		struct bio_list *list)
+{
+	struct bio *bio;
+
+	blk_steal_bios(list, rq);
+	blk_mq_end_request(rq, BLK_STS_OK);
+
+	bio_list_for_each(bio, list) {
+		/*
+		 * bio with BIO_QUEUE_ENTERED will enter queue with
+		 * blk_queue_enter_live.
+		 */
+		bio_clear_flag(bio, BIO_QUEUE_ENTERED);
+	}
+}
+
+static void __blk_mq_sched_retrieve_bios(struct blk_mq_hw_ctx *hctx)
+{
+	struct request_queue *q = hctx->queue;
+	struct bio_list bio_list;
+	LIST_HEAD(rq_list);
+	struct request *rq;
+
+	bio_list_init(&bio_list);
+
+	if (!list_empty_careful(&hctx->dispatch)) {
+		spin_lock(&hctx->lock);
+		if (!list_empty(&hctx->dispatch))
+			list_splice_tail_init(&hctx->dispatch, &rq_list);
+		spin_unlock(&hctx->lock);
+	}
+
+	if (!q->elevator)
+		blk_mq_flush_busy_ctxs(hctx, &rq_list);
+
+	while (!list_empty(&rq_list)) {
+		rq = list_first_entry(&rq_list, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+		blk_mq_sched_retrieve_one_req(rq, &bio_list);
+	}
+
+	if (q->elevator) {
+		struct elevator_queue *e = hctx->queue->elevator;
+
+		while (e->type->ops.mq.has_work &&
+		       e->type->ops.mq.has_work(hctx)) {
+			rq = e->type->ops.mq.dispatch_request(hctx);
+			if (!rq)
+				continue;
+
+			blk_mq_sched_retrieve_one_req(rq, &bio_list);
+		}
+	}
+	/*
+	 * For the request with RQF_FLUSH_SEQ, blk_mq_end_request cannot end
+	 * them but just push the flush sm. So there could still be rqs in
+	 * flush queue, the caller will check q_usage_counter and come back
+	 * again.
+	 */
+	blk_mq_requeue_bios(q, &bio_list, false);
+}
+
+/*
+ * When blk_mq_sched_retrieve_bios returns:
+ * - All the rqs are ended, q_usage_counter is zero
+ * - All the bios are queued to q->requeue_bios
+ */
+void blk_mq_sched_retrieve_bios(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	BUG_ON(!atomic_read(&q->mq_freeze_depth) ||
+	       !blk_queue_quiesced(q));
+
+	/*
+	 * Kick the requeue_work to flush the reqs in requeue_list
+	 */
+	blk_mq_kick_requeue_list(q);
+
+	while (!percpu_ref_is_zero(&q->q_usage_counter)) {
+		queue_for_each_hw_ctx(q, hctx, i)
+			__blk_mq_sched_retrieve_bios(hctx);
+	}
+
+	blk_mq_requeue_bios(q, NULL, true);
+}
+EXPORT_SYMBOL_GPL(blk_mq_sched_retrieve_bios);
+
 static void blk_mq_sched_free_tags(struct blk_mq_tag_set *set,
 				   struct blk_mq_hw_ctx *hctx,
 				   unsigned int hctx_idx)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index dcf10e3..f75598b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -706,6 +706,46 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 }
 EXPORT_SYMBOL(blk_mq_requeue_request);
 
+static void blk_mq_bio_requeue_work(struct work_struct *work)
+{
+	struct request_queue *q =
+		container_of(work, struct request_queue, bio_requeue_work.work);
+	struct bio *bio;
+
+	/*
+	 * Defects:
+	 * - Bios from all cpus have to be issued on one.
+	 * - The requeued older bios have to contend tags with following
+	 *   new bios.
+	 */
+	while (true) {
+		spin_lock_irq(&q->requeue_lock);
+		bio = bio_list_pop(&q->requeue_bios);
+		spin_unlock_irq(&q->requeue_lock);
+		if (!bio)
+			break;
+		/*
+		 * generic_make_request could invoke blk_queue_enter, then
+		 * - sleep when queue is frozen
+		 * - return with failing the bio when the queue is DYING
+		 */
+		generic_make_request(bio);
+	}
+}
+
+void blk_mq_requeue_bios(struct request_queue *q,
+		struct bio_list *bio_list, bool kick)
+{
+	if (bio_list) {
+		spin_lock_irq(&q->requeue_lock);
+		bio_list_merge(&q->requeue_bios, bio_list);
+		spin_unlock_irq(&q->requeue_lock);
+	}
+
+	if (kick)
+		kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->bio_requeue_work, 0);
+}
+EXPORT_SYMBOL(blk_mq_requeue_bios);
+
 static void blk_mq_requeue_work(struct work_struct *work)
 {
 	struct request_queue *q =
@@ -2695,7 +2735,9 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->sg_reserved_size = INT_MAX;
 
 	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
+	INIT_DELAYED_WORK(&q->bio_requeue_work, blk_mq_bio_requeue_work);
 	INIT_LIST_HEAD(&q->requeue_list);
+	bio_list_init(&q->requeue_bios);
 	spin_lock_init(&q->requeue_lock);
 
 	blk_queue_make_request(q, blk_mq_make_request);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 2286dc1..52187d4 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -260,6 +260,10 @@ void blk_mq_end_request(struct request *rq, blk_status_t error);
 void __blk_mq_end_request(struct request *rq, blk_status_t error);
 
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
+void blk_mq_requeue_bios(struct request_queue *q,
+		struct bio_list *bio_list, bool kick);
+void blk_mq_sched_retrieve_bios(struct request_queue *q);
+
 void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
 				bool kick_requeue_list);
 void blk_mq_kick_requeue_list(struct request_queue *q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6120756..0c83948 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -633,8 +633,10 @@ struct request_queue {
 	struct blk_flush_queue	*fq;
 
 	struct list_head	requeue_list;
+	struct bio_list		requeue_bios;
 	spinlock_t		requeue_lock;
 	struct delayed_work	requeue_work;
+	struct delayed_work	bio_requeue_work;
 
 	struct mutex		sysfs_lock;

From patchwork Wed Oct 17 09:07:11 2018
From: Jianchao Wang
To: axboe@kernel.dk
Cc: hch@lst.de, keith.busch@linux.intel.com, ming.lei@redhat.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 2/3] blk-mq: retrieve bios before update nr_hw_queues
Date: Wed, 17 Oct 2018 17:07:11 +0800
Message-Id: <1539767232-9389-3-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
References: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
Retrieve the bios of all requests on the queue to drain it, so we no
longer need to depend on the storage device to drain the queue.

Signed-off-by: Jianchao Wang
---
 block/blk-mq.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f75598b..6d89d3e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3085,7 +3085,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 		return;
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list)
-		blk_mq_freeze_queue(q);
+		blk_freeze_queue_start(q);
+
+	list_for_each_entry(q, &set->tag_list, tag_set_list)
+		blk_mq_sched_retrieve_bios(q);
 
 	/*
 	 * Sync with blk_mq_queue_tag_busy_iter.
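The two-phase loop order in this patch — start the freeze on every queue first, and only then drain each one by retrieving its bios — can be sketched with a toy userspace model. All names below (queue_model, freeze_queue_start, sched_retrieve_bios) are illustrative stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a request_queue during nr_hw_queues update. */
struct queue_model {
	bool freeze_started;	/* freeze begun: no new requests enter */
	int usage_counter;	/* in-flight requests, like q_usage_counter */
	int retrieved;		/* bios moved to the requeue list */
};

/* Phase-1 step: stop admitting new requests (blk_freeze_queue_start's role). */
static void freeze_queue_start(struct queue_model *q)
{
	q->freeze_started = true;
}

/* Phase-2 step: end every in-flight request, collecting its bios,
 * so the queue drains without any help from the storage device. */
static void sched_retrieve_bios(struct queue_model *q)
{
	while (q->usage_counter > 0) {
		q->usage_counter--;
		q->retrieved++;
	}
}

/* Mirror of the loop order in the patch: freeze everything first,
 * then drain each queue. */
static void update_nr_hw_queues(struct queue_model *qs, int n)
{
	int i;

	for (i = 0; i < n; i++)
		freeze_queue_start(&qs[i]);
	for (i = 0; i < n; i++)
		sched_retrieve_bios(&qs[i]);
}
```

Starting the freeze on all queues before draining any of them means no queue keeps accepting requests while its siblings are being drained.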
 	 */

From patchwork Wed Oct 17 09:07:12 2018
From: Jianchao Wang
To: axboe@kernel.dk
Cc: hch@lst.de, keith.busch@linux.intel.com, ming.lei@redhat.com, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 3/3] nvme-pci: unquiesce queues after update hw queues
Date: Wed, 17 Oct 2018 17:07:12 +0800
Message-Id: <1539767232-9389-4-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
References: <1539767232-9389-1-git-send-email-jianchao.w.wang@oracle.com>
Updating hw queues now uses bio retrieval to drain the queues.
Unquiescing the queues before that is not needed and would cause
requests to be issued to a dead hw queue. So move the unquiescing of
the queues, as well as the wait for freeze, to after the hw queues are
updated.

Signed-off-by: Jianchao Wang
---
 drivers/nvme/host/pci.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d668682..dbf6904 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2327,11 +2327,11 @@ static void nvme_reset_work(struct work_struct *work)
 		nvme_remove_namespaces(&dev->ctrl);
 		new_state = NVME_CTRL_ADMIN_ONLY;
 	} else {
-		nvme_start_queues(&dev->ctrl);
-		nvme_wait_freeze(&dev->ctrl);
 		/* hit this only when allocate tagset fails */
 		if (nvme_dev_add(dev))
 			new_state = NVME_CTRL_ADMIN_ONLY;
+		nvme_start_queues(&dev->ctrl);
+		nvme_wait_freeze(&dev->ctrl);
 		nvme_unfreeze(&dev->ctrl);
 	}
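The call ordering this patch establishes in nvme_reset_work() can be illustrated with a small trace model: the queues are restarted only after the hw queues have been updated, so no request can reach a dead hw queue. The enum entries below stand for the nvme_* calls named in the diff; this is a hypothetical sketch, not driver code:

```c
#include <assert.h>

/* Stand-ins for the nvme_* calls in the diff above. */
enum step { DEV_ADD, START_QUEUES, WAIT_FREEZE, UNFREEZE };

struct trace {
	enum step steps[8];
	int n;
};

static void record(struct trace *t, enum step s)
{
	t->steps[t->n++] = s;
}

/* Order after the patch: update the hw queues first, then unquiesce
 * the queues, wait for the freeze to finish, and finally unfreeze. */
static void reset_work_order(struct trace *t)
{
	record(t, DEV_ADD);		/* nvme_dev_add(): update tagset/hw queues */
	record(t, START_QUEUES);	/* nvme_start_queues() */
	record(t, WAIT_FREEZE);		/* nvme_wait_freeze() */
	record(t, UNFREEZE);		/* nvme_unfreeze() */
}
```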