From patchwork Wed Jan 17 20:21:43 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10172655
From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Date: Wed, 17 Jan 2018 12:21:43 -0800
Message-Id: <20180117202203.19756-80-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
Cc: linux-s390@vger.kernel.org, David Howells, linux-nilfs@vger.kernel.org,
    Matthew Wilcox, linux-sh@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    linux-usb@vger.kernel.org, linux-remoteproc@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
    linux-mm@kvack.org, iommu@lists.linux-foundation.org, Stefano Stabellini,
    linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, Bjorn Andersson,
    linux-btrfs@vger.kernel.org
Subject: [Intel-gfx] [PATCH v6 79/99] blk-cgroup: Convert to XArray

From: Matthew Wilcox

This call to radix_tree_preload is awkward.  At the point of allocation,
we're under not only a local lock, but also under the queue lock.  So we
can't back out, drop the lock and retry the allocation.  Replace this
preload call with a call to xa_reserve() which will ensure the memory is
allocated.

Signed-off-by: Matthew Wilcox
---
 block/bfq-cgroup.c         |  4 ++--
 block/blk-cgroup.c         | 52 ++++++++++++++++++++++------------------------
 block/cfq-iosched.c        |  4 ++--
 include/linux/blk-cgroup.h |  5 ++---
 4 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index da1525ec4c87..0648aaa6498b 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -860,7 +860,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
 		return ret;
 
 	ret = 0;
-	spin_lock_irq(&blkcg->lock);
+	xa_lock_irq(&blkcg->blkg_array);
 	bfqgd->weight = (unsigned short)val;
 	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
 		struct bfq_group *bfqg = blkg_to_bfqg(blkg);
@@ -894,7 +894,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
 			bfqg->entity.prio_changed = 1;
 		}
 	}
-	spin_unlock_irq(&blkcg->lock);
+	xa_unlock_irq(&blkcg->blkg_array);
 
 	return ret;
 }
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 4117524ca45b..37962d52f1a8 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -146,12 +146,12 @@ struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg,
 	struct blkcg_gq *blkg;
 
 	/*
-	 * Hint didn't match.  Look up from the radix tree.  Note that the
+	 * Hint didn't match.  Fetch from the xarray.  Note that the
 	 * hint can only be updated under queue_lock as otherwise @blkg
-	 * could have already been removed from blkg_tree.  The caller is
+	 * could have already been removed from blkg_array.  The caller is
 	 * responsible for grabbing queue_lock if @update_hint.
 	 */
-	blkg = radix_tree_lookup(&blkcg->blkg_tree, q->id);
+	blkg = xa_load(&blkcg->blkg_array, q->id);
 	if (blkg && blkg->q == q) {
 		if (update_hint) {
 			lockdep_assert_held(q->queue_lock);
@@ -223,8 +223,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
 	}
 
 	/* insert */
-	spin_lock(&blkcg->lock);
-	ret = radix_tree_insert(&blkcg->blkg_tree, q->id, blkg);
+	xa_lock(&blkcg->blkg_array);
+	ret = xa_err(__xa_store(&blkcg->blkg_array, q->id, blkg, GFP_NOWAIT));
 	if (likely(!ret)) {
 		hlist_add_head_rcu(&blkg->blkcg_node, &blkcg->blkg_list);
 		list_add(&blkg->q_node, &q->blkg_list);
@@ -237,7 +237,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
 		}
 	}
 	blkg->online = true;
-	spin_unlock(&blkcg->lock);
+	xa_unlock(&blkcg->blkg_array);
 
 	if (!ret)
 		return blkg;
@@ -314,7 +314,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 	int i;
 
 	lockdep_assert_held(blkg->q->queue_lock);
-	lockdep_assert_held(&blkcg->lock);
+	lockdep_assert_held(&blkcg->blkg_array.xa_lock);
 
 	/* Something wrong if we are trying to remove same group twice */
 	WARN_ON_ONCE(list_empty(&blkg->q_node));
@@ -334,7 +334,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 
 	blkg->online = false;
 
-	radix_tree_delete(&blkcg->blkg_tree, blkg->q->id);
+	xa_erase(&blkcg->blkg_array, blkg->q->id);
 	list_del_init(&blkg->q_node);
 	hlist_del_init_rcu(&blkg->blkcg_node);
 
@@ -368,9 +368,9 @@ static void blkg_destroy_all(struct request_queue *q)
 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
 		struct blkcg *blkcg = blkg->blkcg;
 
-		spin_lock(&blkcg->lock);
+		xa_lock(&blkcg->blkg_array);
 		blkg_destroy(blkg);
-		spin_unlock(&blkcg->lock);
+		xa_unlock(&blkcg->blkg_array);
 	}
 
 	q->root_blkg = NULL;
@@ -443,7 +443,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
 	int i;
 
 	mutex_lock(&blkcg_pol_mutex);
-	spin_lock_irq(&blkcg->lock);
+	xa_lock_irq(&blkcg->blkg_array);
 
 	/*
 	 * Note that stat reset is racy - it doesn't synchronize against
@@ -462,7 +462,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
 		}
 	}
 
-	spin_unlock_irq(&blkcg->lock);
+	xa_unlock_irq(&blkcg->blkg_array);
 	mutex_unlock(&blkcg_pol_mutex);
 	return 0;
 }
@@ -1012,7 +1012,7 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
 {
 	struct blkcg *blkcg = css_to_blkcg(css);
 
-	spin_lock_irq(&blkcg->lock);
+	xa_lock_irq(&blkcg->blkg_array);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
 		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
@@ -1023,13 +1023,13 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
 			blkg_destroy(blkg);
 			spin_unlock(q->queue_lock);
 		} else {
-			spin_unlock_irq(&blkcg->lock);
+			xa_unlock_irq(&blkcg->blkg_array);
 			cpu_relax();
-			spin_lock_irq(&blkcg->lock);
+			xa_lock_irq(&blkcg->blkg_array);
 		}
 	}
 
-	spin_unlock_irq(&blkcg->lock);
+	xa_unlock_irq(&blkcg->blkg_array);
 
 	wb_blkcg_offline(blkcg);
 }
@@ -1096,8 +1096,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
 			pol->cpd_init_fn(cpd);
 	}
 
-	spin_lock_init(&blkcg->lock);
-	INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN);
+	xa_init_flags(&blkcg->blkg_array, XA_FLAGS_LOCK_IRQ);
 	INIT_HLIST_HEAD(&blkcg->blkg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&blkcg->cgwb_list);
@@ -1132,14 +1131,14 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
 int blkcg_init_queue(struct request_queue *q)
 {
 	struct blkcg_gq *new_blkg, *blkg;
-	bool preloaded;
 	int ret;
 
 	new_blkg = blkg_alloc(&blkcg_root, q, GFP_KERNEL);
 	if (!new_blkg)
 		return -ENOMEM;
 
-	preloaded = !radix_tree_preload(GFP_KERNEL);
+	if (xa_reserve(&blkcg_root.blkg_array, q->id, GFP_KERNEL))
+		return -ENOMEM;
 
 	/*
 	 * Make sure the root blkg exists and count the existing blkgs.  As
@@ -1152,11 +1151,10 @@ int blkcg_init_queue(struct request_queue *q)
 	spin_unlock_irq(q->queue_lock);
 	rcu_read_unlock();
 
-	if (preloaded)
-		radix_tree_preload_end();
-
-	if (IS_ERR(blkg))
+	if (IS_ERR(blkg)) {
+		xa_erase(&blkcg_root.blkg_array, q->id);
 		return PTR_ERR(blkg);
+	}
 
 	q->root_blkg = blkg;
 	q->root_rl.blkg = blkg;
@@ -1374,8 +1372,8 @@ void blkcg_deactivate_policy(struct request_queue *q,
 	__clear_bit(pol->plid, q->blkcg_pols);
 
 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		/* grab blkcg lock too while removing @pd from @blkg */
-		spin_lock(&blkg->blkcg->lock);
+		/* grab xa_lock too while removing @pd from @blkg */
+		xa_lock(&blkg->blkcg->blkg_array);
 
 		if (blkg->pd[pol->plid]) {
 			if (pol->pd_offline_fn)
@@ -1384,7 +1382,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
 			blkg->pd[pol->plid] = NULL;
 		}
 
-		spin_unlock(&blkg->blkcg->lock);
+		xa_unlock(&blkg->blkcg->blkg_array);
 	}
 
 	spin_unlock_irq(q->queue_lock);
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 9f342ef1ad42..a51bef7af8df 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1827,7 +1827,7 @@ static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val,
 	if (val < min || val > max)
 		return -ERANGE;
 
-	spin_lock_irq(&blkcg->lock);
+	xa_lock_irq(&blkcg->blkg_array);
 	cfqgd = blkcg_to_cfqgd(blkcg);
 	if (!cfqgd) {
 		ret = -EINVAL;
@@ -1859,7 +1859,7 @@ static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val,
 	}
 
 out:
-	spin_unlock_irq(&blkcg->lock);
+	xa_unlock_irq(&blkcg->blkg_array);
 	return ret;
 }
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index e9825ff57b15..6278c49d3997 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include
 #include
 #include
@@ -44,9 +44,8 @@ struct blkcg_gq;
 
 struct blkcg {
 	struct cgroup_subsys_state	css;
-	spinlock_t			lock;
 
-	struct radix_tree_root		blkg_tree;
+	struct xarray			blkg_array;
 	struct blkcg_gq	__rcu		*blkg_hint;
 	struct hlist_head		blkg_list;
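
For reference, the reserve-then-store idiom used above in blkcg_init_queue()
and blkg_create() has the general shape shown below.  This is a minimal
illustrative sketch written against the XArray API as it was eventually
merged (xa_reserve(), __xa_store(), xa_err(), xa_erase()); it is not part of
the patch, and the example_* identifiers are hypothetical.

/*
 * Illustrative sketch only -- not part of the patch.  The example_*
 * names are hypothetical; the XArray calls follow the merged API.
 */
#include <linux/errno.h>
#include <linux/xarray.h>

static DEFINE_XARRAY_FLAGS(example_array, XA_FLAGS_LOCK_IRQ);

static int example_insert(unsigned long id, void *item)
{
	int ret;

	/* Step 1: allocate up front, while sleeping (GFP_KERNEL) is still allowed. */
	ret = xa_reserve(&example_array, id, GFP_KERNEL);
	if (ret)
		return ret;	/* -ENOMEM; nothing to unwind yet */

	/*
	 * Step 2: publish the entry under the array's own lock.  The slot
	 * was reserved above, so this store does not need to allocate and
	 * cannot fail for lack of memory even with GFP_NOWAIT.
	 */
	xa_lock_irq(&example_array);
	ret = xa_err(__xa_store(&example_array, id, item, GFP_NOWAIT));
	xa_unlock_irq(&example_array);

	/* Step 3: if the entry was never stored, drop the reservation. */
	if (ret)
		xa_erase(&example_array, id);

	return ret;
}

The point of the pattern is the same one the changelog makes: the GFP_KERNEL
allocation happens before any spinlock is taken, so the store performed under
xa_lock cannot fail on allocation and no back-out-and-retry path is needed
while locks are held.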