From patchwork Tue Sep 8 09:01:57 2015
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 7139261
From: Qu Wenruo
To: 
Subject: [PATCH 04/19] btrfs: qgroup: Introduce function to insert non-overlap reserve range
Date: Tue, 8 Sep 2015 17:01:57 +0800
Message-ID: <1441702920-21278-1-git-send-email-quwenruo@cn.fujitsu.com>
X-Mailer: git-send-email 2.5.1
In-Reply-To: 
<1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>
References: <1441702615-18333-1-git-send-email-quwenruo@cn.fujitsu.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

New function insert_data_ranges() will insert non-overlapping reserve
ranges into the reserve map. It provides the basis for the later qgroup
reserve map implementation.

Signed-off-by: Qu Wenruo 
---
 fs/btrfs/qgroup.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 124 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index fc24fc3..a4e3af4 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2577,6 +2577,130 @@ find_reserve_range(struct btrfs_qgroup_data_rsv_map *map, u64 start)
 }
 
 /*
+ * Insert one data range.
+ * [start, len) ranges here won't overlap with each other.
+ *
+ * Return 0 if the range is inserted and tmp is not used.
+ * Return > 0 if the range is inserted and tmp is used.
+ * No catchable error case; the only possible failure is a logic error,
+ * which will trigger BUG_ON().
+ */
+static int insert_data_range(struct btrfs_qgroup_data_rsv_map *map,
+			     struct data_rsv_range *tmp,
+			     u64 start, u64 len)
+{
+	struct rb_node **p = &map->root.rb_node;
+	struct rb_node *parent = NULL;
+	struct rb_node *tmp_node = NULL;
+	struct data_rsv_range *range = NULL;
+	struct data_rsv_range *prev_range = NULL;
+	struct data_rsv_range *next_range = NULL;
+	int prev_merged = 0;
+	int next_merged = 0;
+	int ret = 0;
+
+	while (*p) {
+		parent = *p;
+		range = rb_entry(parent, struct data_rsv_range, node);
+		if (range->start < start)
+			p = &(*p)->rb_right;
+		else if (range->start > start)
+			p = &(*p)->rb_left;
+		else
+			BUG_ON(1);
+	}
+
+	/* Empty tree, go to the isolated case */
+	if (!range)
+		goto insert_isolated;
+
+	/* Get adjacent ranges */
+	if (range->start < start) {
+		prev_range = range;
+		tmp_node = rb_next(parent);
+		if (tmp_node)
+			next_range = rb_entry(tmp_node, struct data_rsv_range,
+					      node);
+	} else {
+		next_range = range;
+		tmp_node = rb_prev(parent);
+		if (tmp_node)
+			prev_range = rb_entry(tmp_node, struct data_rsv_range,
+					      node);
+	}
+
+	/* Try to merge with the previous and next ranges */
+	if (prev_range && prev_range->start + prev_range->len == start) {
+		prev_merged = 1;
+		prev_range->len += len;
+	}
+	if (next_range && start + len == next_range->start) {
+		next_merged = 1;
+
+		/*
+		 * The range can be merged with both adjacent ranges into one,
+		 * remove the trailing range.
+		 */
+		if (prev_merged) {
+			prev_range->len += next_range->len;
+			rb_erase(&next_range->node, &map->root);
+			kfree(next_range);
+		} else {
+			next_range->start = start;
+			next_range->len += len;
+		}
+	}
+
+insert_isolated:
+	/* Isolated case, need to insert the range now */
+	if (!next_merged && !prev_merged) {
+		BUG_ON(!tmp);
+
+		tmp->start = start;
+		tmp->len = len;
+		rb_link_node(&tmp->node, parent, p);
+		rb_insert_color(&tmp->node, &map->root);
+		ret = 1;
+	}
+	return ret;
+}
+
+/*
+ * Insert reserve ranges and merge them if possible.
+ *
+ * Return 0 if all are inserted and tmp is not used.
+ * Return > 0 if all are inserted and tmp is used.
+ * No catchable error return value.
+ */
+static int insert_data_ranges(struct btrfs_qgroup_data_rsv_map *map,
+			      struct data_rsv_range *tmp,
+			      struct ulist *insert_list)
+{
+	struct ulist_node *unode;
+	struct ulist_iterator uiter;
+	int tmp_used = 0;
+	int ret = 0;
+
+	ULIST_ITER_INIT(&uiter);
+	while ((unode = ulist_next(insert_list, &uiter))) {
+		ret = insert_data_range(map, tmp, unode->val, unode->aux);
+
+		/*
+		 * insert_data_range() won't return an error value, so there
+		 * is no need to handle the < 0 case.
+		 *
+		 * Also tmp should be used at most once, so clear it to
+		 * NULL to cooperate with the sanity check in
+		 * insert_data_range().
+		 */
+		if (ret > 0) {
+			tmp_used = 1;
+			tmp = NULL;
+		}
+	}
+	return tmp_used;
+}
+
+/*
  * Init data_rsv_map for a given inode.
  *
  * This is needed at write time as quota can be disabled and then enabled