From patchwork Mon Sep 30 01:52:12 2019
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 11165931
From: xiubli@redhat.com
To: josef@toxicpanda.com, axboe@kernel.dk
Cc: mchristi@redhat.com, ming.lei@redhat.com, linux-block@vger.kernel.org, Xiubo Li, Gabriel Krisman Bertazi
Subject: [PATCH v4 1/2] blk-mq: Avoid memory reclaim when allocating request map
Date: Mon, 30 Sep 2019 07:22:12 +0530
Message-Id: <20190930015213.8865-2-xiubli@redhat.com>
In-Reply-To: <20190930015213.8865-1-xiubli@redhat.com>
References: <20190930015213.8865-1-xiubli@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Xiubo Li

For some storage drivers, such as nbd, adding a new socket connection updates the hardware queue count via blk_mq_update_nr_hw_queues(), which first freezes all the queues and then performs the actual update. However, in blk_mq_alloc_rq_map() -> blk_mq_init_tags(), allocating memory for the tags may push the mm into direct reclaim. Since the queues are already frozen, any attempt to flush the page cache to disk will block in generic_make_request() -> blk_queue_enter(), waiting for the queues to be unfrozen, and the update deadlocks. The allocation requested here is small, so this is not easy to hit, but in theory it can happen whenever the system is under memory pressure and short on free memory. Gabriel Krisman Bertazi fixed a similar issue in commit 36e1f3d10786 ("blk-mq: Avoid memory reclaim when remapping queues"), but that fix missed this path.
Signed-off-by: Xiubo Li
Cc: Gabriel Krisman Bertazi
Reviewed-by: Ming Lei
---
 block/blk-mq-tag.c | 5 +++--
 block/blk-mq-tag.h | 5 ++++-
 block/blk-mq.c     | 3 ++-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 008388e82b5c..04ee0e4c3fa1 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -462,7 +462,8 @@ static struct blk_mq_tags *blk_mq_init_bitmap_tags(struct blk_mq_tags *tags,
 
 struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 				     unsigned int reserved_tags,
-				     int node, int alloc_policy)
+				     int node, int alloc_policy,
+				     gfp_t flags)
 {
 	struct blk_mq_tags *tags;
 
@@ -471,7 +472,7 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 		return NULL;
 	}
 
-	tags = kzalloc_node(sizeof(*tags), GFP_KERNEL, node);
+	tags = kzalloc_node(sizeof(*tags), flags, node);
 	if (!tags)
 		return NULL;
 
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 61deab0b5a5a..296e0bc97126 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -22,7 +22,10 @@ struct blk_mq_tags {
 };
 
-extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags, unsigned int reserved_tags, int node, int alloc_policy);
+extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+					    unsigned int reserved_tags,
+					    int node, int alloc_policy,
+					    gfp_t flags);
 extern void blk_mq_free_tags(struct blk_mq_tags *tags);
 
 extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 240416057f28..9c52e4dfe132 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2090,7 +2090,8 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 		node = set->numa_node;
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
-				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags));
+				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags),
+				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
 	if (!tags)
 		return NULL;
 

From patchwork Mon Sep 30 01:52:13 2019
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 11165933
From: xiubli@redhat.com
To: josef@toxicpanda.com, axboe@kernel.dk
Cc: mchristi@redhat.com, ming.lei@redhat.com, linux-block@vger.kernel.org, Xiubo Li
Subject: [PATCH v4 2/2] blk-mq: use BLK_MQ_GFP_FLAGS macro instead
Date: Mon, 30 Sep 2019 07:22:13 +0530
Message-Id: <20190930015213.8865-3-xiubli@redhat.com>
In-Reply-To: <20190930015213.8865-1-xiubli@redhat.com>
References: <20190930015213.8865-1-xiubli@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Xiubo Li

There are
at least six places using the same combined GFP flags. Switch them to a single macro to make the code cleaner.

Signed-off-by: Xiubo Li
Reviewed-by: Ming Lei
---
 block/blk-mq.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c52e4dfe132..3d3b3e5787b0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -39,6 +39,8 @@
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
 
+#define BLK_MQ_GFP_FLAGS	(GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY)
+
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 
@@ -2091,21 +2093,19 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
 				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags),
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
+				BLK_MQ_GFP_FLAGS);
 	if (!tags)
 		return NULL;
 
 	tags->rqs = kcalloc_node(nr_tags, sizeof(struct request *),
-				 GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
-				 node);
+				 BLK_MQ_GFP_FLAGS, node);
 	if (!tags->rqs) {
 		blk_mq_free_tags(tags);
 		return NULL;
 	}
 
 	tags->static_rqs = kcalloc_node(nr_tags, sizeof(struct request *),
-					GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
-					node);
+					BLK_MQ_GFP_FLAGS, node);
 	if (!tags->static_rqs) {
 		kfree(tags->rqs);
 		blk_mq_free_tags(tags);
@@ -2167,7 +2167,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 
 		do {
 			page = alloc_pages_node(node,
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
+				BLK_MQ_GFP_FLAGS | __GFP_ZERO,
 				this_order);
 			if (page)
 				break;
@@ -2188,7 +2188,8 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		 * Allow kmemleak to scan these pages as they contain pointers
 		 * to additional allocations like via ops->init_request().
 		 */
-		kmemleak_alloc(p, order_to_size(this_order), 1, GFP_NOIO);
+		kmemleak_alloc(p, order_to_size(this_order), 1,
+			       BLK_MQ_GFP_FLAGS);
 		entries_per_page = order_to_size(this_order) / rq_size;
 		to_do = min(entries_per_page, depth - i);
 		left -= to_do * rq_size;
@@ -2333,7 +2334,7 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set, int node)
 {
 	struct blk_mq_hw_ctx *hctx;
-	gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
+	gfp_t gfp = BLK_MQ_GFP_FLAGS;
 
 	hctx = kzalloc_node(blk_mq_hw_ctx_size(set), gfp, node);
 	if (!hctx)
@@ -3194,7 +3195,7 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
 	if (!q->elevator)
 		return true;
 
-	qe = kmalloc(sizeof(*qe), GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
+	qe = kmalloc(sizeof(*qe), BLK_MQ_GFP_FLAGS);
 	if (!qe)
 		return false;