From patchwork Mon Oct 30 22:41:59 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10033387
From: Bart Van Assche
To: Jens Axboe
Cc: Christoph Hellwig, Martin K. Petersen, Oleksandr Natalenko, Ming Lei,
    Martin Steigerwald, Bart Van Assche
Petersen" , Oleksandr Natalenko , Ming Lei , Martin Steigerwald , "Bart Van Assche" Subject: [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests Date: Mon, 30 Oct 2017 15:41:59 -0700 Message-ID: <20171030224205.25212-2-bart.vanassche@wdc.com> X-Mailer: git-send-email 2.14.2 In-Reply-To: <20171030224205.25212-1-bart.vanassche@wdc.com> References: <20171030224205.25212-1-bart.vanassche@wdc.com> X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrKJMWRmVeSWpSXmKPExsXCtZEjRXf/qu+RBivOylmsvtvPZnHpzxdG i5WrjzJZ7L2lbdF9fQebxfLj/5gslsxqZrI4NBlILHyxidmB0+Py2VKPiev5PHbfbGDzeHmJ w+Pj01ssHu/3XWXz+LxJzqP9QDdTAEcUl01Kak5mWWqRvl0CV8bi9aUFTdIVS28/ZmtgnCbW xcjJISFgIrH+30zGLkYuDiGB1YwSb089YQFJsAnoSZyat48JxBYRUJDo+b2SDaSIWeAgk8S/ 5v1gRcIC/hJrPu9hBrFZBFQlti5dD2bzClhL/NlwlQ1ig7zE+wX3GUFsTgEbiUWbvrOD2EJA NbsmTmeawMi9gJFhFaNYbmZOcW56ZoGhoV5xYl5KZnG2XnJ+7iZGcEBxRu5gfDrR/BAjEwen VANjSNSXJobl15j8zi9ecWHuwsyzPH5C/ycXnVy3s2LuTR7GxV2WuhEvbMI9Z95hXL2jM+u3 wL3rT93WJj9fxCrSn19o+ntSp97u0N75O9vfButI30+XvnZzx6dz3yKsOGc8PFKupfB9kVh9 g7QY0zMbzjMT+pdlPY9q217REdlTsHnrmnNqKxT/KbEUZyQaajEXFScCADLytvvYAQAA MIME-Version: 1.0 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Ming Lei This patch makes it possible to pause request allocation for the legacy block layer by calling blk_mq_freeze_queue() and blk_mq_unfreeze_queue(). Signed-off-by: Ming Lei [ bvanassche: Combined two patches into one, edited a comment and made sure REQ_NOWAIT is handled properly in blk_old_get_request() ] Signed-off-by: Bart Van Assche Reviewed-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn Reviewed-by: Hannes Reinecke Tested-by: Martin Steigerwald Cc: Ming Lei --- block/blk-core.c | 12 ++++++++++++ block/blk-mq.c | 10 ++-------- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index bb4fce694a60..ec4eafb5af9f 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -611,6 +611,9 @@ void blk_set_queue_dying(struct request_queue *q) } spin_unlock_irq(q->queue_lock); } + + /* Make blk_queue_enter() reexamine the DYING flag. */ + wake_up_all(&q->mq_freeze_wq); } EXPORT_SYMBOL_GPL(blk_set_queue_dying); @@ -1397,16 +1400,22 @@ static struct request *blk_old_get_request(struct request_queue *q, unsigned int op, gfp_t gfp_mask) { struct request *rq; + int ret = 0; WARN_ON_ONCE(q->mq_ops); /* create ioc upfront */ create_io_context(gfp_mask, q->node); + ret = blk_queue_enter(q, !(gfp_mask & __GFP_DIRECT_RECLAIM) || + (op & REQ_NOWAIT)); + if (ret) + return ERR_PTR(ret); spin_lock_irq(q->queue_lock); rq = get_request(q, op, NULL, gfp_mask); if (IS_ERR(rq)) { spin_unlock_irq(q->queue_lock); + blk_queue_exit(q); return rq; } @@ -1578,6 +1587,7 @@ void __blk_put_request(struct request_queue *q, struct request *req) blk_free_request(rl, req); freed_request(rl, sync, rq_flags); blk_put_rl(rl); + blk_queue_exit(q); } } EXPORT_SYMBOL_GPL(__blk_put_request); @@ -1859,8 +1869,10 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio) * Grab a free request. This is might sleep but can not fail. * Returns with the queue unlocked. 
 	 */
+	blk_queue_enter_live(q);
 	req = get_request(q, bio->bi_opf, bio, GFP_NOIO);
 	if (IS_ERR(req)) {
+		blk_queue_exit(q);
 		__wbt_done(q->rq_wb, wb_acct);
 		if (PTR_ERR(req) == -ENOMEM)
 			bio->bi_status = BLK_STS_RESOURCE;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 097ca3ece716..59b7de6b616b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -125,7 +125,8 @@ void blk_freeze_queue_start(struct request_queue *q)
 	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
 	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->q_usage_counter);
-		blk_mq_run_hw_queues(q, false);
+		if (q->mq_ops)
+			blk_mq_run_hw_queues(q, false);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
@@ -255,13 +256,6 @@ void blk_mq_wake_waiters(struct request_queue *q)
 	queue_for_each_hw_ctx(q, hctx, i)
 		if (blk_mq_hw_queue_mapped(hctx))
 			blk_mq_tag_wakeup_all(hctx->tags, true);
-
-	/*
-	 * If we are called because the queue has now been marked as
-	 * dying, we need to ensure that processes currently waiting on
-	 * the queue are notified as well.
-	 */
-	wake_up_all(&q->mq_freeze_wq);
 }
 
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
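
Not part of the patch, for illustration only: once q_usage_counter also tracks
legacy requests, a caller can pause request allocation the same way on either
I/O path. The sketch below is minimal and hypothetical; the function name
example_reconfigure_queue and the surrounding context are invented here, only
blk_mq_freeze_queue()/blk_mq_unfreeze_queue() come from the kernel API.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * Hypothetical caller: pause request allocation, change queue state,
 * then resume. With this patch the freeze also covers legacy queues.
 */
static void example_reconfigure_queue(struct request_queue *q)
{
	blk_mq_freeze_queue(q);		/* waits until q_usage_counter drains */
	/* ... modify queue state while no new requests can be allocated ... */
	blk_mq_unfreeze_queue(q);	/* resume request allocation */
}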
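
Also for illustration only, a hedged sketch of what the blk_queue_enter()
change means for callers of blk_get_request() on a legacy queue: a frozen or
dying queue now surfaces as an ERR_PTR() return rather than being ignored.
The function name and the chosen op are assumptions, not taken from this patch.

#include <linux/err.h>
#include <linux/blkdev.h>

/*
 * Hypothetical non-blocking allocation: REQ_NOWAIT (or a gfp_mask without
 * __GFP_DIRECT_RECLAIM) makes the blk_queue_enter() call added by this
 * patch fail instead of sleeping when the queue is frozen or dying.
 */
static struct request *example_alloc_nowait(struct request_queue *q)
{
	struct request *rq;

	rq = blk_get_request(q, REQ_OP_DRV_IN | REQ_NOWAIT, GFP_KERNEL);
	if (IS_ERR(rq))
		return NULL;	/* e.g. queue frozen/dying or no free request */
	return rq;
}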