From patchwork Fri Sep 29 19:43:56 2017
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 9978555
From: Josef Bacik
To: kernel-team@fb.com, linux-btrfs@vger.kernel.org
Subject: [PATCH 12/21] btrfs: move all ref head cleanup to the helper function
Date: Fri, 29 Sep 2017 15:43:56 -0400
Message-Id: <1506714245-23072-13-git-send-email-jbacik@fb.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506714245-23072-1-git-send-email-jbacik@fb.com>
References: <1506714245-23072-1-git-send-email-jbacik@fb.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

We do a couple of different cleanup operations on the ref head: we
adjust counters, we free any reserved space if we didn't end up using
the ref, and we clear the pending csum bytes.  Move all of these
disparate things into cleanup_ref_head and clean up the logic in
__btrfs_run_delayed_refs so that it handles the !ref case much more
cleanly, and make run_one_delayed_ref() deal only with real refs rather
than the ref head.

Signed-off-by: Josef Bacik
Reviewed-by: David Sterba
---
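As a rough illustration of the shape of this change, here is a minimal,
self-contained toy model (not kernel code: the toy_* types, the counters,
and main() are invented stand-ins, and the real locking, tracepoint, and
block group lookup are omitted). The point it sketches is that every
head-specific cleanup step now lives in one helper, so the caller's !ref
path collapses to a single call:

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for the delayed-ref head; not the real btrfs struct. */
struct toy_ref_head {
	long num_bytes;		/* size of the extent this head covers */
	int  total_ref_mod;	/* net ref count change from the refs we ran */
	bool is_data;		/* data extent (has csums) vs. metadata */
	bool must_insert_reserved;	/* reserved extent was never used */
	long qgroup_reserved;	/* bytes still reserved for qgroups */
};

/* Invented stand-in for the transaction-wide delayed-ref bookkeeping. */
struct toy_delayed_refs {
	long pending_csums;
	long total_bytes_pinned;
};

/*
 * Toy analogue of cleanup_ref_head(): the counter adjustment, the pinning
 * of unused reserved space, the pending-csum accounting, and the qgroup
 * reservation release all happen in this one helper, instead of being
 * split between run_one_delayed_ref() and the tail of the caller's loop.
 */
static int toy_cleanup_ref_head(struct toy_delayed_refs *dr,
				struct toy_ref_head *head)
{
	if (head->total_ref_mod < 0) {
		dr->total_bytes_pinned -= head->num_bytes;
		if (head->is_data)
			dr->pending_csums -= head->num_bytes;
	}
	if (head->must_insert_reserved) {
		dr->total_bytes_pinned += head->num_bytes;	/* pin it */
		if (head->is_data)
			printf("would delete csums for %ld bytes\n",
			       head->num_bytes);
	}
	head->qgroup_reserved = 0;	/* release the qgroup reservation */
	return 0;
}

int main(void)
{
	struct toy_delayed_refs dr = { .pending_csums = 4096,
				       .total_bytes_pinned = 4096 };
	struct toy_ref_head head = {
		.num_bytes = 4096, .total_ref_mod = -1,
		.is_data = true, .must_insert_reserved = false,
		.qgroup_reserved = 4096,
	};

	/* In the caller, the "no real refs left" path is now one call. */
	toy_cleanup_ref_head(&dr, &head);
	printf("pending_csums=%ld total_bytes_pinned=%ld\n",
	       dr.pending_csums, dr.total_bytes_pinned);
	return 0;
}

The real cleanup_ref_head() performs the equivalent steps against the
actual btrfs structures, as the diff below shows.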
 fs/btrfs/extent-tree.c | 144 ++++++++++++++++++++++---------------------------
 1 file changed, 64 insertions(+), 80 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index ac1196af6b7e..260414bd86e7 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2501,44 +2501,6 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
 		return 0;
 	}
 
-	if (btrfs_delayed_ref_is_head(node)) {
-		struct btrfs_delayed_ref_head *head;
-		/*
-		 * we've hit the end of the chain and we were supposed
-		 * to insert this extent into the tree. But, it got
-		 * deleted before we ever needed to insert it, so all
-		 * we have to do is clean up the accounting
-		 */
-		BUG_ON(extent_op);
-		head = btrfs_delayed_node_to_head(node);
-		trace_run_delayed_ref_head(fs_info, node, head, node->action);
-
-		if (head->total_ref_mod < 0) {
-			struct btrfs_block_group_cache *cache;
-
-			cache = btrfs_lookup_block_group(fs_info, node->bytenr);
-			ASSERT(cache);
-			percpu_counter_add(&cache->space_info->total_bytes_pinned,
-					   -node->num_bytes);
-			btrfs_put_block_group(cache);
-		}
-
-		if (insert_reserved) {
-			btrfs_pin_extent(fs_info, node->bytenr,
-					 node->num_bytes, 1);
-			if (head->is_data) {
-				ret = btrfs_del_csums(trans, fs_info,
-						      node->bytenr,
-						      node->num_bytes);
-			}
-		}
-
-		/* Also free its reserved qgroup space */
-		btrfs_qgroup_free_delayed_ref(fs_info, head->qgroup_ref_root,
-					      head->qgroup_reserved);
-		return ret;
-	}
-
 	if (node->type == BTRFS_TREE_BLOCK_REF_KEY ||
 	    node->type == BTRFS_SHARED_BLOCK_REF_KEY)
 		ret = run_delayed_tree_ref(trans, fs_info, node, extent_op,
@@ -2641,6 +2603,43 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
 	delayed_refs->num_heads--;
 	rb_erase(&head->href_node, &delayed_refs->href_root);
 	spin_unlock(&delayed_refs->lock);
+	spin_unlock(&head->lock);
+	atomic_dec(&delayed_refs->num_entries);
+
+	trace_run_delayed_ref_head(fs_info, &head->node, head,
+				   head->node.action);
+
+	if (head->total_ref_mod < 0) {
+		struct btrfs_block_group_cache *cache;
+
+		cache = btrfs_lookup_block_group(fs_info, head->node.bytenr);
+		ASSERT(cache);
+		percpu_counter_add(&cache->space_info->total_bytes_pinned,
+				   -head->node.num_bytes);
+		btrfs_put_block_group(cache);
+
+		if (head->is_data) {
+			spin_lock(&delayed_refs->lock);
+			delayed_refs->pending_csums -= head->node.num_bytes;
+			spin_unlock(&delayed_refs->lock);
+		}
+	}
+
+	if (head->must_insert_reserved) {
+		btrfs_pin_extent(fs_info, head->node.bytenr,
+				 head->node.num_bytes, 1);
+		if (head->is_data) {
+			ret = btrfs_del_csums(trans, fs_info,
+					      head->node.bytenr,
+					      head->node.num_bytes);
+		}
+	}
+
+	/* Also free its reserved qgroup space */
+	btrfs_qgroup_free_delayed_ref(fs_info, head->qgroup_ref_root,
+				      head->qgroup_reserved);
+	btrfs_delayed_ref_unlock(head);
+	btrfs_put_delayed_ref(&head->node);
 	return 0;
 }
 
@@ -2724,6 +2723,10 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 			continue;
 		}
 
+		/*
+		 * We're done processing refs in this ref_head, clean everything
+		 * up and move on to the next ref_head.
+		 */
 		if (!ref) {
 			ret = cleanup_ref_head(trans, fs_info, locked_ref);
 			if (ret > 0 ) {
@@ -2733,33 +2736,30 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 			} else if (ret) {
 				return ret;
 			}
+			locked_ref = NULL;
+			count++;
+			continue;
+		}
 
-			/* All delayed refs have been processed, Go ahead
-			 * and send the head node to run_one_delayed_ref,
-			 * so that any accounting fixes can happen
-			 */
-			ref = &locked_ref->node;
-		} else {
-			actual_count++;
-			ref->in_tree = 0;
-			list_del(&ref->list);
-			if (!list_empty(&ref->add_list))
-				list_del(&ref->add_list);
-			/*
-			 * when we play the delayed ref, also correct the
-			 * ref_mod on head
-			 */
-			switch (ref->action) {
-			case BTRFS_ADD_DELAYED_REF:
-			case BTRFS_ADD_DELAYED_EXTENT:
-				locked_ref->node.ref_mod -= ref->ref_mod;
-				break;
-			case BTRFS_DROP_DELAYED_REF:
-				locked_ref->node.ref_mod += ref->ref_mod;
-				break;
-			default:
-				WARN_ON(1);
-			}
+		actual_count++;
+		ref->in_tree = 0;
+		list_del(&ref->list);
+		if (!list_empty(&ref->add_list))
+			list_del(&ref->add_list);
+		/*
+		 * when we play the delayed ref, also correct the
+		 * ref_mod on head
+		 */
+		switch (ref->action) {
+		case BTRFS_ADD_DELAYED_REF:
+		case BTRFS_ADD_DELAYED_EXTENT:
+			locked_ref->node.ref_mod -= ref->ref_mod;
+			break;
+		case BTRFS_DROP_DELAYED_REF:
+			locked_ref->node.ref_mod += ref->ref_mod;
+			break;
+		default:
+			WARN_ON(1);
 		}
 
 		atomic_dec(&delayed_refs->num_entries);
@@ -2786,22 +2786,6 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 			return ret;
 		}
 
-		/*
-		 * If this node is a head, that means all the refs in this head
-		 * have been dealt with, and we will pick the next head to deal
-		 * with, so we must unlock the head and drop it from the cluster
-		 * list before we release it.
-		 */
-		if (btrfs_delayed_ref_is_head(ref)) {
-			if (locked_ref->is_data &&
-			    locked_ref->total_ref_mod < 0) {
-				spin_lock(&delayed_refs->lock);
-				delayed_refs->pending_csums -= ref->num_bytes;
-				spin_unlock(&delayed_refs->lock);
-			}
-			btrfs_delayed_ref_unlock(locked_ref);
-			locked_ref = NULL;
-		}
 		btrfs_put_delayed_ref(ref);
 		count++;
 		cond_resched();