From patchwork Mon Dec 5 18:27:06 2016
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 9461297
From: Jens Axboe
Subject: [PATCH 7/7] block: drop irq+lock when flushing queue plugs
Date: Mon, 5 Dec 2016 11:27:06 -0700
Message-ID: <1480962426-15767-8-git-send-email-axboe@fb.com>
In-Reply-To: <1480962426-15767-1-git-send-email-axboe@fb.com>
References: <1480962426-15767-1-git-send-email-axboe@fb.com>
X-Mailing-List: linux-block@vger.kernel.org

Not convinced this is a faster approach, and it does leave IRQs off
longer than otherwise. With mq+scheduling, it's a problem, since it
forces us to offload the queue running. If we get rid of it, we can
run the queue without the queue lock held.
Signed-off-by: Jens Axboe
---
 block/blk-core.c | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8be12ba91f8e..2c61d2020c3f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3204,18 +3204,21 @@ static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
  * plugger did not intend it.
  */
 static void queue_unplugged(struct request_queue *q, unsigned int depth,
-			    bool from_schedule)
+			    bool from_schedule, unsigned long flags)
 	__releases(q->queue_lock)
 {
 	trace_block_unplug(q, depth, !from_schedule);
 
-	if (q->mq_ops)
-		blk_mq_run_hw_queues(q, true);
-	else if (from_schedule)
-		blk_run_queue_async(q);
-	else
-		__blk_run_queue(q);
-	spin_unlock(q->queue_lock);
+	if (q->mq_ops) {
+		spin_unlock_irqrestore(q->queue_lock, flags);
+		blk_mq_run_hw_queues(q, from_schedule);
+	} else {
+		if (from_schedule)
+			blk_run_queue_async(q);
+		else
+			__blk_run_queue(q);
+		spin_unlock_irqrestore(q->queue_lock, flags);
+	}
 }
 
 static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule)
@@ -3283,11 +3286,6 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	q = NULL;
 	depth = 0;
 
-	/*
-	 * Save and disable interrupts here, to avoid doing it for every
-	 * queue lock we have to take.
-	 */
-	local_irq_save(flags);
 	while (!list_empty(&list)) {
 		rq = list_entry_rq(list.next);
 		list_del_init(&rq->queuelist);
@@ -3297,10 +3295,10 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 			 * This drops the queue lock
 			 */
 			if (q)
-				queue_unplugged(q, depth, from_schedule);
+				queue_unplugged(q, depth, from_schedule, flags);
 			q = rq->q;
 			depth = 0;
-			spin_lock(q->queue_lock);
+			spin_lock_irqsave(q->queue_lock, flags);
 		}
 
 		/*
@@ -3329,9 +3327,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	 * This drops the queue lock
 	 */
 	if (q)
-		queue_unplugged(q, depth, from_schedule);
-
-	local_irq_restore(flags);
+		queue_unplugged(q, depth, from_schedule, flags);
 }
 
 void blk_finish_plug(struct blk_plug *plug)
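
For reference, the shape of the locking change in isolation. This is a
minimal sketch, not block layer code: my_queue, my_flush_list, and
my_run_queue are hypothetical stand-ins for request_queue,
blk_flush_plug_list(), and queue_unplugged().

#include <linux/spinlock.h>

struct my_queue {
	spinlock_t lock;
	/* ... */
};

/*
 * Takes over a lock acquired with spin_lock_irqsave(); the saved IRQ
 * flags must travel with the lock to whoever finally drops it.
 */
static void my_run_queue(struct my_queue *q, unsigned long flags)
	__releases(&q->lock)
{
	/* Re-enable IRQs and drop the lock before the slow part ... */
	spin_unlock_irqrestore(&q->lock, flags);
	/* ... so the queue can be run without the queue lock held. */
}

static void my_flush_list(struct my_queue **queues, int nr)
{
	unsigned long flags;
	int i;

	/*
	 * Old pattern: one local_irq_save(flags) here, a plain
	 * spin_lock(&q->lock) per queue, and local_irq_restore(flags)
	 * at the end -- IRQs stay off across the entire loop.
	 *
	 * New pattern: each lock carries its own save/restore, so IRQs
	 * are only off while a given queue lock is actually held.
	 */
	for (i = 0; i < nr; i++) {
		struct my_queue *q = queues[i];

		spin_lock_irqsave(&q->lock, flags);
		/* ... dispatch this queue's plugged requests ... */
		my_run_queue(q, flags);		/* releases q->lock */
	}
}

For mq with scheduling, this is what lets blk_mq_run_hw_queues() be
called directly instead of being offloaded: by the time it runs, the
queue lock has been dropped and IRQs are back on.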