From patchwork Mon Sep 16 02:16:30 2019
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 11146231
From: xiubli@redhat.com
To: josef@toxicpanda.com, axboe@kernel.dk
Cc: mchristi@redhat.com, linux-block@vger.kernel.org, Xiubo Li,
    Gabriel Krisman Bertazi, Ming Lei
Subject: [PATCHv2 1/2] blk-mq: Avoid memory reclaim when allocating request map
Date: Mon, 16 Sep 2019 07:46:30 +0530
Message-Id: <20190916021631.4327-2-xiubli@redhat.com>
In-Reply-To: <20190916021631.4327-1-xiubli@redhat.com>
References: <20190916021631.4327-1-xiubli@redhat.com>
From: Xiubo Li

For some storage drivers, such as nbd, adding a new socket connection
updates the number of hardware queues via blk_mq_update_nr_hw_queues(),
which freezes all the queues first and then does the update. But in
blk_mq_alloc_rq_map() --> blk_mq_init_tags(), allocating memory for the
tags may push the mm into direct reclaim. Since the queues are already
frozen, if reclaim needs to flush page cache to disk it will get stuck
in generic_make_request() --> blk_queue_enter(), waiting for the queues
to be unfrozen, and deadlock.

The allocation requested here is small, so the deadlock is not easy to
hit, but in theory it can happen when the system is under memory
pressure and out of memory.

Gabriel Krisman Bertazi fixed a similar issue in commit 36e1f3d10786
("blk-mq: Avoid memory reclaim when remapping queues"), but this path
was missed.
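The call chain behind the deadlock can be sketched as follows (a
simplified sketch assembled from the description above, not an exact
copy of the kernel source):

```text
blk_mq_update_nr_hw_queues()
  -> freezes every queue
  -> blk_mq_alloc_rq_map()
       -> blk_mq_init_tags()
            -> kzalloc_node(..., GFP_KERNEL, ...)    /* may enter direct reclaim */
                 -> reclaim writes dirty page cache back to disk
                      -> generic_make_request()
                           -> blk_queue_enter()      /* waits for unfreeze: deadlock */
```

GFP_NOIO forbids the allocator from starting any I/O to satisfy the
request, which breaks the cycle.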
Signed-off-by: Xiubo Li
CC: Gabriel Krisman Bertazi
Reviewed-by: Ming Lei
---
 block/blk-mq-tag.c | 5 +++--
 block/blk-mq-tag.h | 5 ++++-
 block/blk-mq.c     | 3 ++-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 008388e82b5c..04ee0e4c3fa1 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -462,7 +462,8 @@ static struct blk_mq_tags *blk_mq_init_bitmap_tags(struct blk_mq_tags *tags,
 
 struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 				     unsigned int reserved_tags,
-				     int node, int alloc_policy)
+				     int node, int alloc_policy,
+				     gfp_t flags)
 {
 	struct blk_mq_tags *tags;
 
@@ -471,7 +472,7 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 		return NULL;
 	}
 
-	tags = kzalloc_node(sizeof(*tags), GFP_KERNEL, node);
+	tags = kzalloc_node(sizeof(*tags), flags, node);
 	if (!tags)
 		return NULL;
 
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 61deab0b5a5a..296e0bc97126 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -22,7 +22,10 @@ struct blk_mq_tags {
 };
 
-extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags, unsigned int reserved_tags, int node, int alloc_policy);
+extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+					    unsigned int reserved_tags,
+					    int node, int alloc_policy,
+					    gfp_t flags);
 extern void blk_mq_free_tags(struct blk_mq_tags *tags);
 
 extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 240416057f28..9c52e4dfe132 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2090,7 +2090,8 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 		node = set->numa_node;
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
-				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags));
+				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags),
+				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
 	if (!tags)
 		return NULL;

From patchwork Mon Sep 16 02:16:31 2019
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 11146233
From: xiubli@redhat.com
To: josef@toxicpanda.com, axboe@kernel.dk
Cc: mchristi@redhat.com, linux-block@vger.kernel.org, Xiubo Li, Ming Lei
Subject: [PATCHv2 2/2] blk-mq: use BLK_MQ_GFP_FLAGS macro instead
Date: Mon, 16 Sep 2019 07:46:31 +0530
Message-Id: <20190916021631.4327-3-xiubli@redhat.com>
In-Reply-To: <20190916021631.4327-1-xiubli@redhat.com>
References: <20190916021631.4327-1-xiubli@redhat.com>

From: Xiubo Li

At least six call sites use the same combined GFP flags. Switch them to
a single macro to make the code cleaner.

Signed-off-by: Xiubo Li
Reviewed-by: Ming Lei
---
 block/blk-mq.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c52e4dfe132..a5faad4690cf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -39,6 +39,8 @@
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
 
+#define BLK_MQ_GFP_FLAGS (GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY)
+
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 
@@ -2091,21 +2093,19 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
 				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags),
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
+				BLK_MQ_GFP_FLAGS);
 	if (!tags)
 		return NULL;
 
 	tags->rqs = kcalloc_node(nr_tags, sizeof(struct request *),
-				 GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
-				 node);
+				 BLK_MQ_GFP_FLAGS, node);
 	if (!tags->rqs) {
 		blk_mq_free_tags(tags);
 		return NULL;
 	}
 
 	tags->static_rqs = kcalloc_node(nr_tags, sizeof(struct request *),
-					GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
-					node);
+					BLK_MQ_GFP_FLAGS, node);
 	if (!tags->static_rqs) {
 		kfree(tags->rqs);
 		blk_mq_free_tags(tags);
@@ -2167,7 +2167,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 
 		do {
 			page = alloc_pages_node(node,
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
+				BLK_MQ_GFP_FLAGS | __GFP_ZERO,
 				this_order);
 			if (page)
 				break;
@@ -2333,7 +2333,7 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
 		int node)
 {
 	struct blk_mq_hw_ctx *hctx;
-	gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
+	gfp_t gfp = BLK_MQ_GFP_FLAGS;
 
 	hctx = kzalloc_node(blk_mq_hw_ctx_size(set), gfp, node);
 	if (!hctx)
@@ -3194,7 +3194,7 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
 
 	if (!q->elevator)
 		return true;
 
-	qe = kmalloc(sizeof(*qe), GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
+	qe = kmalloc(sizeof(*qe), BLK_MQ_GFP_FLAGS);
 	if (!qe)
 		return false;