From patchwork Tue Dec 7 15:35:47 2021
X-Patchwork-Submitter: Naohiro Aota
X-Patchwork-Id: 12662135
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: David Sterba, Naohiro Aota
Subject: [PATCH 1/3] btrfs: zoned: unset dedicated block group on allocation failure
Date: Wed, 8 Dec 2021 00:35:47 +0900
Message-Id: <20211207153549.2946602-2-naohiro.aota@wdc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20211207153549.2946602-1-naohiro.aota@wdc.com>
References: <20211207153549.2946602-1-naohiro.aota@wdc.com>

Allocating an extent from a block group can fail for various reasons.
When an allocation from a dedicated block group (for tree-log or
relocation data) fails, we need to unregister it as a dedicated one so
that we can allocate a new dedicated block group. However, we
currently return early when the block group is read-only, fully used,
or fails to activate its zone.
As a result, we keep the unusable block group registered as a
dedicated one, leading to further allocation failures. With many block
groups, the allocator iterates through them in a hopeless loop looking
for a free extent, resulting in a hung task.

Fix the issue by delaying the return and doing the proper cleanups.

Signed-off-by: Naohiro Aota
---
 fs/btrfs/extent-tree.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 3fd736a02c1e..34200c1a7da0 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3790,23 +3790,35 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	spin_unlock(&fs_info->relocation_bg_lock);
 	if (skip)
 		return 1;
 
+	/* Check RO and no space case before trying to activate it */
 	spin_lock(&block_group->lock);
 	if (block_group->ro ||
 	    block_group->alloc_offset == block_group->zone_capacity) {
-		spin_unlock(&block_group->lock);
-		return 1;
+		ret = 1;
+		/*
+		 * May need to clear fs_info->{treelog,data_reloc}_bg.
+		 * Return the error after taking the locks.
+		 */
 	}
 	spin_unlock(&block_group->lock);
 
-	if (!btrfs_zone_activate(block_group))
-		return 1;
+	if (!ret && !btrfs_zone_activate(block_group)) {
+		ret = 1;
+		/*
+		 * May need to clear fs_info->{treelog,data_reloc}_bg.
+		 * Return the error after taking the locks.
+		 */
+	}
 
 	spin_lock(&space_info->lock);
 	spin_lock(&block_group->lock);
 	spin_lock(&fs_info->treelog_bg_lock);
 	spin_lock(&fs_info->relocation_bg_lock);
 
+	if (ret)
+		goto out;
+
 	ASSERT(!ffe_ctl->for_treelog ||
 	       block_group->start == fs_info->treelog_bg ||
 	       fs_info->treelog_bg == 0);
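Note: the "goto out" added above targets a cleanup label outside the
quoted hunk. A minimal sketch of what that cleanup has to do, based on
the commit message and the surrounding code (illustrative; the exact
conditions in the kernel source may differ):

	out:
		/* On failure, stop treating this block group as the
		 * dedicated tree-log / data-reloc target, so the next
		 * attempt can pick a fresh block group. */
		if (ret && ffe_ctl->for_treelog)
			fs_info->treelog_bg = 0;
		if (ret && ffe_ctl->for_data_reloc &&
		    fs_info->data_reloc_bg == block_group->start)
			fs_info->data_reloc_bg = 0;
		/* Drop the locks in reverse order of acquisition */
		spin_unlock(&fs_info->relocation_bg_lock);
		spin_unlock(&fs_info->treelog_bg_lock);
		spin_unlock(&block_group->lock);
		spin_unlock(&space_info->lock);
		return ret;

With the error return deferred to this point, a read-only, full, or
non-activatable block group stops being the dedicated one instead of
being returned to over and over.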
From patchwork Tue Dec 7 15:35:48 2021
X-Patchwork-Submitter: Naohiro Aota
X-Patchwork-Id: 12662137
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: David Sterba, Naohiro Aota
Subject: [PATCH 2/3] btrfs: add extent allocator hook to decide to allocate chunk or not
Date: Wed, 8 Dec 2021 00:35:48 +0900
Message-Id: <20211207153549.2946602-3-naohiro.aota@wdc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20211207153549.2946602-1-naohiro.aota@wdc.com>
References: <20211207153549.2946602-1-naohiro.aota@wdc.com>

Introduce a new hook for the extent allocator policy. With the new
hook, a policy can decide whether or not to allocate a new block
group. If not, it returns -ENOSPC, and btrfs_reserve_extent() will cut
the allocation size in half and retry the allocation if min_alloc_size
is large enough.

For now, the hook is a placeholder; it will be replaced with the real
implementation in the next patch.

Signed-off-by: Naohiro Aota
---
 fs/btrfs/extent-tree.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 34200c1a7da0..5ec512673dc5 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3959,6 +3959,19 @@ static void found_extent(struct find_free_extent_ctl *ffe_ctl,
 	}
 }
 
+static bool can_allocate_chunk(struct btrfs_fs_info *fs_info,
+			       struct find_free_extent_ctl *ffe_ctl)
+{
+	switch (ffe_ctl->policy) {
+	case BTRFS_EXTENT_ALLOC_CLUSTERED:
+		return true;
+	case BTRFS_EXTENT_ALLOC_ZONED:
+		return true;
+	default:
+		BUG();
+	}
+}
+
 static int chunk_allocation_failed(struct find_free_extent_ctl *ffe_ctl)
 {
 	switch (ffe_ctl->policy) {
@@ -4046,6 +4059,10 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
 		struct btrfs_trans_handle *trans;
 		int exist = 0;
 
+		/* Check if allocation policy allows to create a new chunk */
+		if (!can_allocate_chunk(fs_info, ffe_ctl))
+			return -ENOSPC;
+
 		trans = current->journal_info;
 		if (trans)
 			exist = 1;
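Note: the size-halving behavior this hook relies on lives in
btrfs_reserve_extent(). A condensed sketch of that retry loop follows
(simplified from the kernel source; the function name and the
find_free_extent() signature here are abridged, and locking,
accounting, and error reporting are omitted):

	/* Sketch: retry find_free_extent() with ever smaller sizes */
	static int reserve_extent_sketch(struct btrfs_fs_info *fs_info,
					 u64 num_bytes, u64 min_alloc_size,
					 struct btrfs_key *ins)
	{
		bool final_tried = (num_bytes == min_alloc_size);
		int ret;

	again:
		ret = find_free_extent(fs_info, num_bytes, ins);
		if (ret == -ENOSPC && !final_tried) {
			/*
			 * Halve the request, keeping it sector aligned
			 * and never below min_alloc_size.
			 */
			num_bytes = round_down(num_bytes >> 1,
					       fs_info->sectorsize);
			num_bytes = max(num_bytes, min_alloc_size);
			if (num_bytes == min_alloc_size)
				final_tried = true;
			goto again;
		}
		return ret;
	}

So when can_allocate_chunk() makes find_free_extent() fail with
-ENOSPC, the allocation is not necessarily lost: a caller that passed
a min_alloc_size smaller than num_bytes gets retried at half the size.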
From patchwork Tue Dec 7 15:35:49 2021
X-Patchwork-Submitter: Naohiro Aota
X-Patchwork-Id: 12662141
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: David Sterba, Naohiro Aota
Subject: [PATCH 3/3] btrfs: zoned: fix chunk allocation condition for zoned allocator
Date: Wed, 8 Dec 2021 00:35:49 +0900
Message-Id: <20211207153549.2946602-4-naohiro.aota@wdc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20211207153549.2946602-1-naohiro.aota@wdc.com>
References: <20211207153549.2946602-1-naohiro.aota@wdc.com>

The ZNS specification defines a limit on the number of "active" zones.
That limit forces us to limit the number of block groups that can be
used for allocation at the same time. To stay within the limit, commit
a85f05e59bc1 ("btrfs: zoned: avoid chunk allocation if active block
group has enough space") made us reuse the existing active block
groups as much as possible when we can't activate any other zone
without sacrificing an already activated block group.

However, the check is wrong in two ways. First, it checks the
condition for every raid index (ffe_ctl->index). Even if the condition
is reached and "ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size"
is met, there can be other block groups with enough space to hold
ffe_ctl->num_bytes. (This cannot actually happen in the current zoned
code, which only supports the SINGLE profile, but it can once other
RAID types are enabled.)

Second, it checks the active zone availability depending on the raid
index. The raid index is just an index into space_info->block_groups,
so it has nothing to do with chunk allocation.

These mistakes cause a faulty allocation in a certain situation.
Consider zoned btrfs running on a device whose max_active_zones == 0
(no limit).
Suppose no block group has room to fit ffe_ctl->num_bytes, but some
have room to meet ffe_ctl->min_alloc_size (i.e., max_extent_size >
num_bytes >= min_alloc_size). In this situation, the following occurs:

- With the SINGLE raid index, the allocator reaches the chunk
  allocation check code
- The check returns true because we can activate a new zone (no limit)
- But, before allocating the chunk, it iterates to the next raid index
  (RAID5)
- Since there are no RAID5 block groups on zoned mode, it reaches the
  check code again
- The check now returns false because of btrfs_can_activate_zone()'s
  "if (raid_index != BTRFS_RAID_SINGLE)" part
- That results in returning -ENOSPC without allocating a new chunk

As a result, we hit -ENOSPC too early. Move the check to its proper
place, the newly introduced can_allocate_chunk() hook, and make the
active zone check depend on the allocation flags, not on the raid
index.

Signed-off-by: Naohiro Aota
---
 fs/btrfs/extent-tree.c | 21 +++++++++------------
 fs/btrfs/zoned.c       |  5 ++---
 fs/btrfs/zoned.h       |  5 ++---
 3 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 5ec512673dc5..802add9857ed 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3966,6 +3966,15 @@ static bool can_allocate_chunk(struct btrfs_fs_info *fs_info,
 	case BTRFS_EXTENT_ALLOC_CLUSTERED:
 		return true;
 	case BTRFS_EXTENT_ALLOC_ZONED:
+		/*
+		 * If we have enough free space left in an already
+		 * active block group and we can't activate any other
+		 * zone now, do not allow allocating a new chunk and
+		 * let find_free_extent() retry with a smaller size.
+		 */
+		if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
+		    !btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
+			return false;
 		return true;
 	default:
 		BUG();
@@ -4012,18 +4021,6 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
 		return 0;
 	}
 
-	if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
-	    !btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->index)) {
-		/*
-		 * If we have enough free space left in an already active block
-		 * group and we can't activate any other zone now, retry the
-		 * active ones with a smaller allocation size. Returning early
-		 * from here will tell btrfs_reserve_extent() to haven the
-		 * size.
-		 */
-		return -ENOSPC;
-	}
-
 	if (ffe_ctl->loop >= LOOP_CACHING_WAIT && ffe_ctl->have_caching_bg)
 		return 1;
 
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 67d932d70798..06681fae450a 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1883,7 +1883,7 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
 	return ret;
 }
 
-bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, int raid_index)
+bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 {
 	struct btrfs_device *device;
 	bool ret = false;
@@ -1892,8 +1892,7 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, int raid_index
 		return true;
 
 	/* Non-single profiles are not supported yet */
-	if (raid_index != BTRFS_RAID_SINGLE)
-		return false;
+	ASSERT((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0);
 
 	/* Check if there is a device with active zones left */
 	mutex_lock(&fs_devices->device_list_mutex);
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index e53ab7b96437..002ff86c8608 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -71,8 +71,7 @@ struct btrfs_device *btrfs_zoned_get_device(struct btrfs_fs_info *fs_info,
 						 u64 logical, u64 length);
 bool btrfs_zone_activate(struct btrfs_block_group *block_group);
 int btrfs_zone_finish(struct btrfs_block_group *block_group);
-bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
-			     int raid_index);
+bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags);
 void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
 			     u64 length);
 void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
@@ -222,7 +221,7 @@ static inline int btrfs_zone_finish(struct btrfs_block_group *block_group)
 }
 
 static inline bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
-					   int raid_index)
+					   u64 flags)
 {
 	return true;
 }
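Note on the new ASSERT: the block group profile is encoded as bits in
the allocation flags, and SINGLE is represented by the absence of any
profile bit, so "(flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0" is
exactly the "single profile" condition. A standalone illustration
(flag values copied from include/uapi/linux/btrfs_tree.h; the
RAID1C3/RAID1C4 bits are omitted from the mask here for brevity):

	#include <assert.h>
	#include <stdint.h>

	#define BTRFS_BLOCK_GROUP_DATA		(1ULL << 0)
	#define BTRFS_BLOCK_GROUP_RAID0		(1ULL << 3)
	#define BTRFS_BLOCK_GROUP_RAID1		(1ULL << 4)
	#define BTRFS_BLOCK_GROUP_DUP		(1ULL << 5)
	#define BTRFS_BLOCK_GROUP_RAID10	(1ULL << 6)
	#define BTRFS_BLOCK_GROUP_RAID5		(1ULL << 7)
	#define BTRFS_BLOCK_GROUP_RAID6		(1ULL << 8)
	#define PROFILE_MASK	(BTRFS_BLOCK_GROUP_RAID0 |  \
				 BTRFS_BLOCK_GROUP_RAID1 |  \
				 BTRFS_BLOCK_GROUP_DUP |    \
				 BTRFS_BLOCK_GROUP_RAID10 | \
				 BTRFS_BLOCK_GROUP_RAID5 |  \
				 BTRFS_BLOCK_GROUP_RAID6)

	int main(void)
	{
		/* SINGLE data: only the type bit set, no profile bit */
		uint64_t single_data = BTRFS_BLOCK_GROUP_DATA;
		assert((single_data & PROFILE_MASK) == 0);

		/* Any real profile bit would trip the new ASSERT */
		uint64_t raid1_data = BTRFS_BLOCK_GROUP_DATA |
				      BTRFS_BLOCK_GROUP_RAID1;
		assert((raid1_data & PROFILE_MASK) != 0);
		return 0;
	}

Passing ffe_ctl->flags instead of ffe_ctl->index also fixes the loop
described above: the flags stay constant while find_free_extent()
iterates over the raid indexes, so the activation check can no longer
flip from true to false just because the iteration moved past
BTRFS_RAID_SINGLE.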