From patchwork Fri Dec  1 00:08:44 2017
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10085879
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Omar Sandoval, Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH 3/7] blk-mq: Make blk_mq_mark_tag_wait() easier to read
Date: Thu, 30 Nov 2017 16:08:44 -0800
Message-Id: <20171201000848.2656-4-bart.vanassche@wdc.com>
In-Reply-To: <20171201000848.2656-1-bart.vanassche@wdc.com>
References: <20171201000848.2656-1-bart.vanassche@wdc.com>
List-ID: linux-block@vger.kernel.org

Reduce the number of return statements from three to one. Reduce the
number of spin_unlock(&this_hctx->lock) calls from two to one. Fix a
misleading comment: since blk-mq-tag.c always uses wake_up_all(), other
waiters are woken up whether or not the current hctx is removed from the
wait list. This patch does not change any functionality.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Omar Sandoval
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
---
 block/blk-mq.c | 41 +++++++++++++++--------------------------
 1 file changed, 15 insertions(+), 26 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7f290a91a612..26fec4dfa40f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1039,6 +1039,11 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 	if (!shared_tags) {
 		if (!test_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state))
 			set_bit(BLK_MQ_S_SCHED_RESTART, &this_hctx->state);
+		ret = blk_mq_get_driver_tag(rq, hctx, false);
+		/*
+		 * Don't clear RESTART here, someone else could have set it.
+		 * At most this will cost an extra queue run.
+		 */
 	} else {
 		wait = &this_hctx->dispatch_wait;
 		if (!list_empty_careful(&wait->entry))
@@ -1052,37 +1057,21 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx **hctx,
 
 		ws = bt_wait_ptr(&this_hctx->tags->bitmap_tags, this_hctx);
 		add_wait_queue(&ws->wait, wait);
-	}
-
-	/*
-	 * It's possible that a tag was freed in the window between the
-	 * allocation failure and adding the hardware queue to the wait
-	 * queue.
-	 */
-	ret = blk_mq_get_driver_tag(rq, hctx, false);
-
-	if (!shared_tags) {
 		/*
-		 * Don't clear RESTART here, someone else could have set it.
-		 * At most this will cost an extra queue run.
+		 * It's possible that a tag was freed in the window between the
+		 * allocation failure and adding the hardware queue to the wait
+		 * queue.
 		 */
-		return ret;
-	} else {
-		if (!ret) {
-			spin_unlock(&this_hctx->lock);
-			return false;
+		ret = blk_mq_get_driver_tag(rq, hctx, false);
+		/* If we got a tag remove ourselves from the wait queue. */
+		if (ret) {
+			spin_lock_irq(&ws->wait.lock);
+			list_del_init(&wait->entry);
+			spin_unlock_irq(&ws->wait.lock);
 		}
-
-		/*
-		 * We got a tag, remove ourselves from the wait queue to ensure
-		 * someone else gets the wakeup.
-		 */
-		spin_lock_irq(&ws->wait.lock);
-		list_del_init(&wait->entry);
-		spin_unlock_irq(&ws->wait.lock);
 		spin_unlock(&this_hctx->lock);
-		return true;
 	}
+	return ret;
 }
 
 bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,