From patchwork Tue Dec 1 07:11:26 2015
From: Qu Wenruo <quwenruo@cn.fujitsu.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v2 06/25] btrfs-progs: convert: Introduce function to calculate the available space
Date: Tue, 1 Dec 2015 15:11:26 +0800
Message-Id: <1448953905-28673-7-git-send-email-quwenruo@cn.fujitsu.com>
In-Reply-To: <1448953905-28673-1-git-send-email-quwenruo@cn.fujitsu.com>
References: <1448953905-28673-1-git-send-email-quwenruo@cn.fujitsu.com>
Introduce a new function, calculate_available_space(), to fill the data_chunks and free cache_trees in btrfs_convert_context with the available space.

Unlike the old implementation, this function does the following new work:
1) Batch the used ext* data space, to ensure the data chunks cover all of it,
   and store the result into mkfs_cfg->convert_data_chunks for later use.
2) Avoid superblocks and other reserved space at the chunk level.
   Neither the batched data space nor the free space will cover reserved
   ranges, such as the superblocks or the first 1MiB.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
 btrfs-convert.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 2 deletions(-)

diff --git a/btrfs-convert.c b/btrfs-convert.c
index 2fef1ed..8026907 100644
--- a/btrfs-convert.c
+++ b/btrfs-convert.c
@@ -2637,12 +2637,100 @@ static int wipe_reserved_ranges(struct cache_tree *tree, u64 min_stripe_size,
 	return ret;
 }
 
+static int calculate_available_space(struct btrfs_convert_context *cctx)
+{
+	struct cache_tree *used = &cctx->used;
+	struct cache_tree *data_chunks = &cctx->data_chunks;
+	struct cache_tree *free = &cctx->free;
+	struct cache_extent *cache;
+	u64 cur_off = 0;
+	/*
+	 * Twice the minimal chunk size, to allow the later
+	 * wipe_reserved_ranges() to work without needing to consider overlaps
+	 */
+	u64 min_stripe_size = 2 * 16 * 1024 * 1024;
+	int ret;
+
+	/* Calculate data_chunks */
+	for (cache = first_cache_extent(used); cache;
+	     cache = next_cache_extent(cache)) {
+		u64 cur_len;
+
+		if (cache->start + cache->size < cur_off)
+			continue;
+		if (cache->start > cur_off + min_stripe_size)
+			cur_off = cache->start;
+		cur_len = max(cache->start + cache->size - cur_off,
+			      min_stripe_size);
+		ret = add_merge_cache_extent(data_chunks, cur_off, cur_len);
+		if (ret < 0)
+			goto out;
+		cur_off += cur_len;
+	}
+	/*
+	 * Remove reserved ranges, so we won't ever bother relocating an old
+	 * filesystem extent to another place.
+	 */
+	ret = wipe_reserved_ranges(data_chunks, min_stripe_size, 1);
+	if (ret < 0)
+		goto out;
+
+	cur_off = 0;
+	/*
+	 * Calculate free space
+	 * Always round up the start bytenr, to avoid a metadata extent
+	 * crossing a stripe boundary, as the later mkfs_convert() won't have
+	 * all the extent allocation checks
+	 */
+	for (cache = first_cache_extent(data_chunks); cache;
+	     cache = next_cache_extent(cache)) {
+		if (cache->start < cur_off)
+			continue;
+		if (cache->start > cur_off) {
+			u64 insert_start;
+			u64 len;
+
+			len = cache->start - round_up(cur_off,
+						      BTRFS_STRIPE_LEN);
+			insert_start = round_up(cur_off, BTRFS_STRIPE_LEN);
+
+			ret = add_merge_cache_extent(free, insert_start, len);
+			if (ret < 0)
+				goto out;
+		}
+		cur_off = cache->start + cache->size;
+	}
+	/* Don't forget the last range */
+	if (cctx->total_bytes > cur_off) {
+		u64 len = cctx->total_bytes - cur_off;
+		u64 insert_start;
+
+		insert_start = round_up(cur_off, BTRFS_STRIPE_LEN);
+
+		ret = add_merge_cache_extent(free, insert_start, len);
+		if (ret < 0)
+			goto out;
+	}
+
+	/* Remove reserved bytes */
+	ret = wipe_reserved_ranges(free, min_stripe_size, 0);
+out:
+	return ret;
+}
 /*
- * Read used space
+ * Read used space, and since we have the used space,
+ * calculate data_chunks and free for the later mkfs
  */
 static int convert_read_used_space(struct btrfs_convert_context *cctx)
 {
-	return cctx->convert_ops->read_used_space(cctx);
+	int ret;
+
+	ret = cctx->convert_ops->read_used_space(cctx);
+	if (ret)
+		return ret;
+
+	ret = calculate_available_space(cctx);
+	return ret;
 }
 
 static int do_convert(const char *devname, int datacsum, int packing, int noxattr,