From patchwork Fri Sep 30 13:23:30 2016
X-Patchwork-Submitter: Alexander Gordeev
X-Patchwork-Id: 9358269
From: Alexander Gordeev
To:
linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, linux-block@vger.kernel.org
Subject: [PATCH v2 8/8] blk-mq: Cleanup (de-)allocation of blk_mq_hw_ctx::ctxs
Date: Fri, 30 Sep 2016 15:23:30 +0200
Sender: linux-block-owner@vger.kernel.org
X-Mailing-List: linux-block@vger.kernel.org

Handling of the blk_mq_hw_ctx::ctxs field's (de-)allocation is confusing
due to the special treatment of the field introduced in commit
c3b4afca7023 ("blk-mq: free hctx->ctxs in queue's release handler").
Make it a bit more readable by binding the (de-)allocation of hctx and
hctx->ctxs together.

CC: linux-block@vger.kernel.org
Signed-off-by: Alexander Gordeev
Reviewed-by: Sagi Grimberg
---
 block/blk-mq.c | 51 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 19 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 78ee5af..03654af 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1641,18 +1641,9 @@ static int blk_mq_init_hctx(struct request_queue *q,
 
 	hctx->tags = set->tags[hctx_idx];
 
-	/*
-	 * Allocate space for all possible cpus to avoid allocation at
-	 * runtime
-	 */
-	hctx->ctxs = kmalloc_node(nr_cpu_ids * sizeof(void *),
-			GFP_KERNEL, node);
-	if (!hctx->ctxs)
-		goto unregister_cpu_notifier;
-
 	if (sbitmap_init_node(&hctx->ctx_map, nr_cpu_ids, ilog2(8), GFP_KERNEL,
 			      node))
-		goto free_ctxs;
+		goto unregister_cpu_notifier;
 
 	hctx->nr_ctx = 0;
 
@@ -1679,8 +1670,6 @@ static int blk_mq_init_hctx(struct request_queue *q,
 		set->ops->exit_hctx(hctx, hctx_idx);
  free_bitmap:
 	sbitmap_free(&hctx->ctx_map);
- free_ctxs:
-	kfree(hctx->ctxs);
  unregister_cpu_notifier:
 	blk_mq_remove_cpuhp(hctx);
 	free_cpumask_var(hctx->cpumask);
@@ -1848,6 +1837,33 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
 	mutex_unlock(&set->tag_list_lock);
 }
 
+static struct blk_mq_hw_ctx *alloc_hctx(int node)
+{
+	struct blk_mq_hw_ctx *hctx = kzalloc_node(sizeof(*hctx),
+			GFP_KERNEL, node);
+	if (!hctx)
+		return NULL;
+
+	/*
+	 * Allocate space for all possible cpus to avoid allocation at
+	 * runtime
+	 */
+	hctx->ctxs = kmalloc_node(nr_cpu_ids * sizeof(void *),
+			GFP_KERNEL, node);
+	if (!hctx->ctxs) {
+		kfree(hctx);
+		return NULL;
+	}
+
+	return hctx;
+}
+
+static void free_hctx(struct blk_mq_hw_ctx *hctx)
+{
+	kfree(hctx->ctxs);
+	kfree(hctx);
+}
+
 /*
  * It is the actual release handler for mq, but we do it from
  * request queue's release handler for avoiding use-after-free
@@ -1863,8 +1879,7 @@ void blk_mq_release(struct request_queue *q)
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (!hctx)
 			continue;
-		kfree(hctx->ctxs);
-		kfree(hctx);
+		free_hctx(hctx);
 	}
 
 	q->mq_map = NULL;
@@ -1909,12 +1924,12 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 		if (node == NUMA_NO_NODE)
 			node = set->numa_node;
 
-		hctx = kzalloc_node(sizeof(*hctx), GFP_KERNEL, node);
+		hctx = alloc_hctx(node);
 		if (!hctx)
 			break;
 
 		if (blk_mq_init_hctx(q, set, hctx, i, node)) {
-			kfree(hctx);
+			free_hctx(hctx);
 			break;
 		}
@@ -1936,9 +1951,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 		}
 
 		blk_mq_exit_hctx(q, set, hctx, j);
-
-		kfree(hctx->ctxs);
-		kfree(hctx);
+		free_hctx(hctx);
 	}
 	q->nr_hw_queues = i;
 	blk_mq_sysfs_register(q);