From patchwork Wed Sep 14 16:29:36 2016
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 9331953
From: Mike Snitzer
To: axboe@kernel.dk, dm-devel@redhat.com
Cc: linux-block@vger.kernel.org
Subject: [PATCH v2 6/6] dm mpath: delay the requeue of blk-mq requests while all paths down
Date: Wed, 14 Sep 2016 12:29:36 -0400
Message-Id: <1473870576-54331-7-git-send-email-snitzer@redhat.com>
In-Reply-To: <1473870576-54331-1-git-send-email-snitzer@redhat.com>
References: <1473870576-54331-1-git-send-email-snitzer@redhat.com>

Return DM_MAPIO_DELAY_REQUEUE from .clone_and_map_rq.  Also, return false
from .busy if all paths are down, so that blk-mq requests get mapped via
.clone_and_map_rq -- which results in DM_MAPIO_DELAY_REQUEUE being
returned to dm-rq.
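For context, here is a minimal sketch of how the dm-rq side can act on these
return values.  This is not code from this patch; the function name and the
100ms delay are illustrative assumptions, and only the blk-mq requeue helpers
are real block-layer calls:

#include <linux/blk-mq.h>
#include <linux/device-mapper.h>

/*
 * Illustrative sketch only (not part of this patch): roughly how dm-rq
 * can turn DM_MAPIO_DELAY_REQUEUE into a delayed blk-mq requeue instead
 * of an immediate retry.
 */
static void sketch_handle_map_result(struct request *rq, int r)
{
	switch (r) {
	case DM_MAPIO_REMAPPED:
		/* the clone was mapped to a valid path; dispatch proceeds */
		break;
	case DM_MAPIO_REQUEUE:
		/* put the request back and retry as soon as possible */
		blk_mq_requeue_request(rq);
		blk_mq_kick_requeue_list(rq->q);
		break;
	case DM_MAPIO_DELAY_REQUEUE:
		/*
		 * Put the request back, but only kick the requeue list
		 * after a delay so retries do not spin while all paths
		 * are still down.
		 */
		blk_mq_requeue_request(rq);
		blk_mq_delay_kick_requeue_list(rq->q, 100 /* msecs, illustrative */);
		break;
	default:
		/* r < 0: hard failure, e.g. -EIO with queue_if_no_path disabled */
		blk_mq_end_request(rq, r);
		break;
	}
}

The difference between the immediate and the delayed requeue is where the
reduced kworker load described below comes from.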
This change allows for a noticeable reduction in cpu utilization
(reduced kworker load) while all paths are down, e.g.:

system CPU idleness (as measured by fio's --idle-prof=system):
before: system: 86.58%
after: system: 98.60%

Signed-off-by: Mike Snitzer
Reviewed-by: Hannes Reinecke
---
 drivers/md/dm-mpath.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index f69715b..f31fa13 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -550,9 +550,9 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
 		pgpath = choose_pgpath(m, nr_bytes);
 
 	if (!pgpath) {
-		if (!must_push_back_rq(m))
-			r = -EIO;	/* Failed */
-		return r;
+		if (must_push_back_rq(m))
+			return DM_MAPIO_DELAY_REQUEUE;
+		return -EIO;	/* Failed */
 	} else if (test_bit(MPATHF_QUEUE_IO, &m->flags) ||
 		   test_bit(MPATHF_PG_INIT_REQUIRED, &m->flags)) {
 		pg_init_all_paths(m);
@@ -1992,11 +1992,14 @@ static int multipath_busy(struct dm_target *ti)
 	struct priority_group *pg, *next_pg;
 	struct pgpath *pgpath;
 
-	/* pg_init in progress or no paths available */
-	if (atomic_read(&m->pg_init_in_progress) ||
-	    (!atomic_read(&m->nr_valid_paths) && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)))
+	/* pg_init in progress */
+	if (atomic_read(&m->pg_init_in_progress))
 		return true;
 
+	/* no paths available, for blk-mq: rely on IO mapping to delay requeue */
+	if (!atomic_read(&m->nr_valid_paths) && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
+		return (m->queue_mode != DM_TYPE_MQ_REQUEST_BASED);
+
 	/* Guess which priority_group will be used at next mapping time */
 	pg = lockless_dereference(m->current_pg);
 	next_pg = lockless_dereference(m->next_pg);
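The idleness numbers above come from fio's built-in idle profiler.  As a
rough guide only (the device path and job parameters below are assumptions,
not the exact job behind the numbers above), a run along these lines drives
I/O at the multipath device while sampling system idleness:

  fio --name=all-paths-down --filename=/dev/mapper/mpatha \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 \
      --time_based --runtime=60 --idle-prof=system

With queue_if_no_path set and all paths failed, the I/O simply queues, and
the reported system idleness shows how much CPU the requeue path is using.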