
[5/6] fetch: make `--atomic` flag cover backfilling of tags

Message ID 55dbe19a1a4d05d84c81356af1a3f04b65f8aa7b.1644565025.git.ps@pks.im (mailing list archive)
State Superseded
Series fetch: improve atomicity of `--atomic` flag

Commit Message

Patrick Steinhardt Feb. 11, 2022, 7:47 a.m. UTC
When fetching references from a remote we by default also fetch all tags
which point into the history we have fetched. This is a separate step
performed after updating local references because it requires us to walk
over the history on the client-side to determine whether the remote has
announced any tags which point to one of the fetched commits.

This backfilling of tags isn't covered by the `--atomic` flag: right
now, it only applies to the step where we update our local references.
This is an oversight at the time the flag was introduced: its purpose is
to either update all references or none, but right now we happily update
local references even in the case where backfilling failed.

Fix this by pulling up creation of the reference transaction such that
we can pass the same transaction to both the code which updates local
references and to the code which backfills tags. This allows us to only
commit the transaction in case both actions succeed.

Note that we also have to start passing the transaction into
`find_non_local_tags()`: this function is responsible for finding all
tags which we need to backfill. Right now, it will happily return tags
which we have already been updated with our local references. But when
we use a single transaction for both local references and backfilling
then it may happen that we try to queue the same reference update twice
to the transaction, which consequentially triggers a bug. We thus have
to skip over any tags which have already been queued. Unfortunately,
this requires us to reach into internals of the reference transaction to
access queued updates, but there is no non-internal interface right now
which would allow us to access this information.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
---
 builtin/fetch.c      | 89 ++++++++++++++++++++++++++++++--------------
 t/t5503-tagfollow.sh | 11 ++----
 2 files changed, 65 insertions(+), 35 deletions(-)

Comments

Christian Couder Feb. 15, 2022, 8:11 a.m. UTC | #1
On Mon, Feb 14, 2022 at 10:13 PM Patrick Steinhardt <ps@pks.im> wrote:
>
> When fetching references from a remote we by default also fetch all tags
> which point into the history we have fetched. This is a separate step
> performed after updating local references because it requires us to walk
> over the history on the client-side to determine whether the remote has
> announced any tags which point to one of the fetched commits.
>
> This backfilling of tags isn't covered by the `--atomic` flag: right
> now, it only applies to the step where we update our local references.
> This is an oversight at the time the flag was introduced: its purpose is
> to either update all references or none, but right now we happily update
> local references even in the case where backfilling failed.

Also it looks like the backfilling of tags itself isn't atomic, right?
Some tags could be backfilled while others aren't.

> Fix this by pulling up creation of the reference transaction such that
> we can pass the same transaction to both the code which updates local
> references and to the code which backfills tags. This allows us to only
> commit the transaction in case both actions succeed.

Maybe this could be seen as a regression by users who are mostly
interested in the local references though.

> Note that we also have to start passing the transaction into
> `find_non_local_tags()`: this function is responsible for finding all
> tags which we need to backfill. Right now, it will happily return tags
> which we have already been updated with our local references. But when

s/we have/have/

> we use a single transaction for both local references and backfilling
> then it may happen that we try to queue the same reference update twice
> to the transaction, which consequentially triggers a bug. We thus have

s/consequentially/consequently/

> to skip over any tags which have already been queued. Unfortunately,
> this requires us to reach into internals of the reference transaction to
> access queued updates, but there is no non-internal interface right now
> which would allow us to access this information.

This makes me wonder if such a non-internal interface should be
implemented first. Or if some function to queue a reference update
could check if the same reference update has already been queued.
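
Just to illustrate the shape of what I mean (a hypothetical helper; nothing
like it exists in refs.h today):

/* hypothetical: return 1 if an update for `refname` is already queued */
int ref_transaction_has_update(struct ref_transaction *transaction,
			       const char *refname);

That way callers could ask the transaction itself instead of peeking into
its internals.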
Jonathan Tan Feb. 16, 2022, 11:41 p.m. UTC | #2
Christian Couder <christian.couder@gmail.com> writes:
> > Fix this by pulling up creation of the reference transaction such that
> > we can pass the same transaction to both the code which updates local
> > references and to the code which backfills tags. This allows us to only
> > commit the transaction in case both actions succeed.
> 
> Maybe this could be seen as a regression by users who are mostly
> interested in the local references though.

I don't think we need to worry about this - as far as I know,
backfilling of tags only happens when the server doesn't support
include-tag, which was introduced in 2009 (and in my experience, they
all do). And in the rare case that backfilling happens, I think the user
would want to have atomicity in the similarly rare case that the first fetch
succeeds and the second fails.
Elijah Newren Feb. 17, 2022, 1:34 a.m. UTC | #3
On Mon, Feb 14, 2022 at 1:32 AM Patrick Steinhardt <ps@pks.im> wrote:
>
> When fetching references from a remote we by default also fetch all tags
> which point into the history we have fetched. This is a separate step
> performed after updating local references because it requires us to walk
> over the history on the client-side to determine whether the remote has
> announced any tags which point to one of the fetched commits.
>
> This backfilling of tags isn't covered by the `--atomic` flag: right
> now, it only applies to the step where we update our local references.
> This is an oversight at the time the flag was introduced: its purpose is
> to either update all references or none, but right now we happily update
> local references even in the case where backfilling failed.
>
> Fix this by pulling up creation of the reference transaction such that
> we can pass the same transaction to both the code which updates local
> references and to the code which backfills tags. This allows us to only
> commit the transaction in case both actions succeed.
>
> Note that we also have to start passing the transaction into
> `find_non_local_tags()`: this function is responsible for finding all
> tags which we need to backfill. Right now, it will happily return tags
> which we have already been updated with our local references. But when
> we use a single transaction for both local references and backfilling
> then it may happen that we try to queue the same reference update twice
> to the transaction, which consequentially triggers a bug. We thus have
> to skip over any tags which have already been queued. Unfortunately,
> this requires us to reach into internals of the reference transaction to
> access queued updates, but there is no non-internal interface right now
> which would allow us to access this information.

I like the changes you are making here in general, but I do agree that
reaching into refs-internal feels a bit icky.  I'm not familiar with
the refs API nor the fetching code, so feel free to ignore these
ideas, but I'm just throwing them out there as possibilities to avoid
reaching into refs-internal:

  - you are trying to check for existing transactions to avoid
duplicates, but those existing transactions came from elsewhere in the
same code we control.  Could we store a strset or strmap of the items
being updated (in addition to storing them in the transaction), and
then use the strset/strmap to filter out which tags we need to
backfill?  Or would that require plumbing an extra variable through an
awful lot of callers to get the information into the right places?

  - would it make sense to add a flag to the transaction API to allow
duplicates if both updates update the ref to the same value?  (I'm
guessing you're updating to the same value, right?)

  - should we just add something to the refs API along the lines of
"transaction_includes_update_for()" or something like that?

[...]
> @@ -361,12 +362,28 @@ static void find_non_local_tags(const struct ref *refs,
>         const struct ref *ref;
>         struct refname_hash_entry *item = NULL;
>         const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;
> +       int i;
>
>         refname_hash_init(&existing_refs);
>         refname_hash_init(&remote_refs);
>         create_fetch_oidset(head, &fetch_oids);
>
>         for_each_ref(add_one_refname, &existing_refs);
> +
> +       /*
> +        * If we already have a transaction, then we need to filter out all
> +        * tags which have already been queued up.
> +        */
> +       for (i = 0; transaction && i < transaction->nr; i++) {
> +               if (!starts_with(transaction->updates[i]->refname, "refs/tags/") ||
> +                   !(transaction->updates[i]->flags & REF_HAVE_NEW))
> +                       continue;
> +               (void) refname_hash_add(&existing_refs,
> +                                       transaction->updates[i]->refname,
> +                                       &transaction->updates[i]->new_oid);

Why the typecast here?
Patrick Steinhardt Feb. 17, 2022, 11:37 a.m. UTC | #4
On Tue, Feb 15, 2022 at 09:11:55AM +0100, Christian Couder wrote:
> On Mon, Feb 14, 2022 at 10:13 PM Patrick Steinhardt <ps@pks.im> wrote:
> >
> > When fetching references from a remote we by default also fetch all tags
> > which point into the history we have fetched. This is a separate step
> > performed after updating local references because it requires us to walk
> > over the history on the client-side to determine whether the remote has
> > announced any tags which point to one of the fetched commits.
> >
> > This backfilling of tags isn't covered by the `--atomic` flag: right
> > now, it only applies to the step where we update our local references.
> > This is an oversight at the time the flag was introduced: its purpose is
> > to either update all references or none, but right now we happily update
> > local references even in the case where backfilling failed.
> 
> Also it looks like the backfilling of tags itself isn't atomic, right?
> Some tags could be backfilled while others aren't.

Right.

> > Fix this by pulling up creation of the reference transaction such that
> > we can pass the same transaction to both the code which updates local
> > references and to the code which backfills tags. This allows us to only
> > commit the transaction in case both actions succeed.
> 
> Maybe this could be seen as a regression by users who are mostly
> interested in the local references though.

Even though the commit message distinguishes between "local references" and
"backfilled tags", ultimately they're the same. Both are references that
end up in your local refdb, so from the user's point of view there is no
real difference here. The documentation of the `--atomic` flag only says
that "either all refs are updated, or on error, no refs are updated". I
think that the current behaviour does not fit that description.

> > Note that we also have to start passing the transaction into
> > `find_non_local_tags()`: this function is responsible for finding all
> > tags which we need to backfill. Right now, it will happily return tags
> > which we have already been updated with our local references. But when
> 
> s/we have/have/
> 
> > we use a single transaction for both local references and backfilling
> > then it may happen that we try to queue the same reference update twice
> > to the transaction, which consequentially triggers a bug. We thus have
> 
> s/consequentially/consequently/
> 
> > to skip over any tags which have already been queued. Unfortunately,
> > this requires us to reach into internals of the reference transaction to
> > access queued updates, but there is no non-internal interface right now
> > which would allow us to access this information.
> 
> This makes me wonder if such a non-internal interface should be
> implemented first. Or if some function to queue a reference update
> could check if the same reference update has already been queued.

Yeah. I noted that omission in the cover letter already, but didn't
want to fix it yet before getting some initial feedback. I'll add
something like a `for_each_queued_reference_update()` in v2 of this
series though.
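
Roughly this is the shape I have in mind (a sketch only; the name and
exact signature may still change):

/* tentative addition to refs.h; name and exact signature are not final */
typedef void (*for_each_queued_update_fn)(const char *refname,
					  const struct object_id *old_oid,
					  const struct object_id *new_oid,
					  void *cb_data);

/* invoke `cb` once for every update queued in `transaction` */
void for_each_queued_reference_update(struct ref_transaction *transaction,
				      for_each_queued_update_fn cb,
				      void *cb_data);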

Patrick
Patrick Steinhardt Feb. 17, 2022, 11:58 a.m. UTC | #5
On Wed, Feb 16, 2022 at 05:34:23PM -0800, Elijah Newren wrote:
> On Mon, Feb 14, 2022 at 1:32 AM Patrick Steinhardt <ps@pks.im> wrote:
> >
> > When fetching references from a remote we by default also fetch all tags
> > which point into the history we have fetched. This is a separate step
> > performed after updating local references because it requires us to walk
> > over the history on the client-side to determine whether the remote has
> > announced any tags which point to one of the fetched commits.
> >
> > This backfilling of tags isn't covered by the `--atomic` flag: right
> > now, it only applies to the step where we update our local references.
> > This is an oversight at the time the flag was introduced: its purpose is
> > to either update all references or none, but right now we happily update
> > local references even in the case where backfilling failed.
> >
> > Fix this by pulling up creation of the reference transaction such that
> > we can pass the same transaction to both the code which updates local
> > references and to the code which backfills tags. This allows us to only
> > commit the transaction in case both actions succeed.
> >
> > Note that we also have to start passing the transaction into
> > `find_non_local_tags()`: this function is responsible for finding all
> > tags which we need to backfill. Right now, it will happily return tags
> > which we have already been updated with our local references. But when
> > we use a single transaction for both local references and backfilling
> > then it may happen that we try to queue the same reference update twice
> > to the transaction, which consequentially triggers a bug. We thus have
> > to skip over any tags which have already been queued. Unfortunately,
> > this requires us to reach into internals of the reference transaction to
> > access queued updates, but there is no non-internal interface right now
> > which would allow us to access this information.
> 
> I like the changes you are making here in general, but I do agree that
> reaching into refs-internal feels a bit icky.  I'm not familiar with
> the refs API nor the fetching code, so feel free to ignore these
> ideas, but I'm just throwing them out there as possibilities to avoid
> reaching into refs-internal:
> 
>   - you are trying to check for existing transactions to avoid
> duplicates, but those existing transactions came from elsewhere in the
> same code we control.  Could we store a strset or strmap of the items
> being updated (in addition to storing them in the transaction), and
> then use the strset/strmap to filter out which tags we need to
> backfill?  Or would that require plumbing an extra variable through an
> awful lot of callers to get the information into the right places?

We would basically need to plumb the variable through to most callsites
which also get the transaction as input, and that's rather deep into the
callstack. The reason I think it's preferable to use the transaction
instead is that it holds the definitive state of all updates we have
already queued, and thus we cannot accidentally forget to update another
auxiliary variable.

>   - would it make sense to add a flag to the transaction API to allow
> duplicates if both updates update the ref to the same value?  (I'm
> guessing you're updating to the same value, right?)

It should be the same value, yes. There is a race though in the context
of tag backfilling: if the initial fetch pulls in some tags, then it can
happen that the second fetch used in some cases for the backfilling
mechanism pulls in the same tag references but with different target
objects. It's an unlikely thing to happen, but it cannot be ruled out
100%. As Jonathan pointed out, the backfilling fetch is only used when
the transport does not use "include-tag" though.

The result in that case would be that the transaction aborts because of
duplicate additions of the same ref with different values. And I'd say
that this is correct behaviour when the user asked for an atomic
fetch.

>   - should we just add something to the refs API along the lines of
> "transaction_includes_update_for()" or something like that?

I think something in the spirit of this last option would be the easiest
solution. Using `includes_updates_for()` or the above solution of a flag
which avoids duplicate updates would potentially be quadratic in
behaviour though if implemented naively: we would need to walk all queued
updates for each of the tags we want to queue. That's easy enough to
avoid if we just add a `for_each_queued_reference_update()` and then
continue to do the same thing as we do below. It also gives us greater
flexibility compared to the other alternatives.
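
To make that a bit more concrete, the consumer in `find_non_local_tags()`
could then look roughly like this (only a sketch, building on the tentative
iterator above; it would replace the loop over `transaction->updates[]`
quoted below):

/* sketch: record tags which the transaction has already queued */
static void add_already_queued_tags(const char *refname,
				    const struct object_id *old_oid,
				    const struct object_id *new_oid,
				    void *cb_data)
{
	struct hashmap *queued_tags = cb_data;

	if (starts_with(refname, "refs/tags/") && new_oid)
		(void) refname_hash_add(queued_tags, refname, new_oid);
}

	/* in find_non_local_tags(): */
	if (transaction)
		for_each_queued_reference_update(transaction,
						 add_already_queued_tags,
						 &existing_refs);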

> [...]
> > @@ -361,12 +362,28 @@ static void find_non_local_tags(const struct ref *refs,
> >         const struct ref *ref;
> >         struct refname_hash_entry *item = NULL;
> >         const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;
> > +       int i;
> >
> >         refname_hash_init(&existing_refs);
> >         refname_hash_init(&remote_refs);
> >         create_fetch_oidset(head, &fetch_oids);
> >
> >         for_each_ref(add_one_refname, &existing_refs);
> > +
> > +       /*
> > +        * If we already have a transaction, then we need to filter out all
> > +        * tags which have already been queued up.
> > +        */
> > +       for (i = 0; transaction && i < transaction->nr; i++) {
> > +               if (!starts_with(transaction->updates[i]->refname, "refs/tags/") ||
> > +                   !(transaction->updates[i]->flags & REF_HAVE_NEW))
> > +                       continue;
> > +               (void) refname_hash_add(&existing_refs,
> > +                                       transaction->updates[i]->refname,
> > +                                       &transaction->updates[i]->new_oid);
> 
> Why the typecast here?

`refname_hash_add()` returns the newly added entry, and we don't care
about that. `add_one_refname()` has the same cast, potentially to
demonstrate that we don't need the return value? Compilers shouldn't
care, I think, but on the other hand some static analysis tools like
Coverity tend to complain about such things.

Patrick

Patch

diff --git a/builtin/fetch.c b/builtin/fetch.c
index 1eda0b68ff..348e64cf2c 100644
--- a/builtin/fetch.c
+++ b/builtin/fetch.c
@@ -4,7 +4,7 @@ 
 #include "cache.h"
 #include "config.h"
 #include "repository.h"
-#include "refs.h"
+#include "refs/refs-internal.h"
 #include "refspec.h"
 #include "object-store.h"
 #include "oidset.h"
@@ -350,6 +350,7 @@  static void clear_item(struct refname_hash_entry *item)
 }
 
 static void find_non_local_tags(const struct ref *refs,
+				struct ref_transaction *transaction,
 				struct ref **head,
 				struct ref ***tail)
 {
@@ -361,12 +362,28 @@  static void find_non_local_tags(const struct ref *refs,
 	const struct ref *ref;
 	struct refname_hash_entry *item = NULL;
 	const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;
+	int i;
 
 	refname_hash_init(&existing_refs);
 	refname_hash_init(&remote_refs);
 	create_fetch_oidset(head, &fetch_oids);
 
 	for_each_ref(add_one_refname, &existing_refs);
+
+	/*
+	 * If we already have a transaction, then we need to filter out all
+	 * tags which have already been queued up.
+	 */
+	for (i = 0; transaction && i < transaction->nr; i++) {
+		if (!starts_with(transaction->updates[i]->refname, "refs/tags/") ||
+		    !(transaction->updates[i]->flags & REF_HAVE_NEW))
+			continue;
+		(void) refname_hash_add(&existing_refs,
+					transaction->updates[i]->refname,
+					&transaction->updates[i]->new_oid);
+	}
+
+
 	for (ref = refs; ref; ref = ref->next) {
 		if (!starts_with(ref->name, "refs/tags/"))
 			continue;
@@ -600,7 +617,7 @@  static struct ref *get_ref_map(struct remote *remote,
 		/* also fetch all tags */
 		get_fetch_map(remote_refs, tag_refspec, &tail, 0);
 	else if (tags == TAGS_DEFAULT && *autotags)
-		find_non_local_tags(remote_refs, &ref_map, &tail);
+		find_non_local_tags(remote_refs, NULL, &ref_map, &tail);
 
 	/* Now append any refs to be updated opportunistically: */
 	*tail = orefs;
@@ -1083,12 +1100,12 @@  N_("it took %.2f seconds to check forced updates; you can use\n"
    "to avoid this check\n");
 
 static int store_updated_refs(const char *raw_url, const char *remote_name,
-			      int connectivity_checked, struct ref *ref_map,
+			      int connectivity_checked,
+			      struct ref_transaction *transaction, struct ref *ref_map,
 			      struct fetch_head *fetch_head, struct worktree **worktrees)
 {
 	int url_len, i, rc = 0;
 	struct strbuf note = STRBUF_INIT, err = STRBUF_INIT;
-	struct ref_transaction *transaction = NULL;
 	const char *what, *kind;
 	struct ref *rm;
 	char *url;
@@ -1110,14 +1127,6 @@  static int store_updated_refs(const char *raw_url, const char *remote_name,
 		}
 	}
 
-	if (atomic_fetch) {
-		transaction = ref_transaction_begin(&err);
-		if (!transaction) {
-			error("%s", err.buf);
-			goto abort;
-		}
-	}
-
 	prepare_format_display(ref_map);
 
 	/*
@@ -1233,14 +1242,6 @@  static int store_updated_refs(const char *raw_url, const char *remote_name,
 		}
 	}
 
-	if (!rc && transaction) {
-		rc = ref_transaction_commit(transaction, &err);
-		if (rc) {
-			error("%s", err.buf);
-			goto abort;
-		}
-	}
-
 	if (rc & STORE_REF_ERROR_DF_CONFLICT)
 		error(_("some local refs could not be updated; try running\n"
 		      " 'git remote prune %s' to remove any old, conflicting "
@@ -1258,7 +1259,6 @@  static int store_updated_refs(const char *raw_url, const char *remote_name,
  abort:
 	strbuf_release(&note);
 	strbuf_release(&err);
-	ref_transaction_free(transaction);
 	free(url);
 	return rc;
 }
@@ -1299,6 +1299,7 @@  static int check_exist_and_connected(struct ref *ref_map)
 }
 
 static int fetch_and_consume_refs(struct transport *transport,
+				  struct ref_transaction *transaction,
 				  struct ref *ref_map,
 				  struct fetch_head *fetch_head,
 				  struct worktree **worktrees)
@@ -1323,7 +1324,8 @@  static int fetch_and_consume_refs(struct transport *transport,
 
 	trace2_region_enter("fetch", "consume_refs", the_repository);
 	ret = store_updated_refs(transport->url, transport->remote->name,
-				 connectivity_checked, ref_map, fetch_head, worktrees);
+				 connectivity_checked, transaction, ref_map,
+				 fetch_head, worktrees);
 	trace2_region_leave("fetch", "consume_refs", the_repository);
 
 out:
@@ -1496,6 +1498,7 @@  static struct transport *prepare_transport(struct remote *remote, int deepen)
 }
 
 static int backfill_tags(struct transport *transport,
+			 struct ref_transaction *transaction,
 			 struct ref *ref_map,
 			 struct fetch_head *fetch_head,
 			 struct worktree **worktrees)
@@ -1519,7 +1522,7 @@  static int backfill_tags(struct transport *transport,
 	transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, NULL);
 	transport_set_option(transport, TRANS_OPT_DEPTH, "0");
 	transport_set_option(transport, TRANS_OPT_DEEPEN_RELATIVE, NULL);
-	retcode = fetch_and_consume_refs(transport, ref_map, fetch_head, worktrees);
+	retcode = fetch_and_consume_refs(transport, transaction, ref_map, fetch_head, worktrees);
 
 	if (gsecondary) {
 		transport_disconnect(gsecondary);
@@ -1532,6 +1535,7 @@  static int backfill_tags(struct transport *transport,
 static int do_fetch(struct transport *transport,
 		    struct refspec *rs)
 {
+	struct ref_transaction *transaction = NULL;
 	struct ref *ref_map = NULL;
 	int autotags = (transport->remote->fetch_tags == 1);
 	int retcode = 0;
@@ -1541,6 +1545,7 @@  static int do_fetch(struct transport *transport,
 	int must_list_refs = 1;
 	struct worktree **worktrees = get_worktrees();
 	struct fetch_head fetch_head = { 0 };
+	struct strbuf err = STRBUF_INIT;
 
 	if (tags == TAGS_DEFAULT) {
 		if (transport->remote->fetch_tags == 2)
@@ -1602,6 +1607,14 @@  static int do_fetch(struct transport *transport,
 	if (retcode)
 		goto cleanup;
 
+	if (atomic_fetch) {
+		transaction = ref_transaction_begin(&err);
+		if (!transaction) {
+			retcode = error("%s", err.buf);
+			goto cleanup;
+		}
+	}
+
 	if (tags == TAGS_DEFAULT && autotags)
 		transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, "1");
 	if (prune) {
@@ -1619,7 +1632,7 @@  static int do_fetch(struct transport *transport,
 		}
 	}
 
-	if (fetch_and_consume_refs(transport, ref_map, &fetch_head, worktrees)) {
+	if (fetch_and_consume_refs(transport, transaction, ref_map, &fetch_head, worktrees)) {
 		retcode = 1;
 		goto cleanup;
 	}
@@ -1629,7 +1642,7 @@  static int do_fetch(struct transport *transport,
 	if (tags == TAGS_DEFAULT && autotags) {
 		struct ref *tags_ref_map = NULL, **tail = &tags_ref_map;
 
-		find_non_local_tags(remote_refs, &tags_ref_map, &tail);
+		find_non_local_tags(remote_refs, transaction, &tags_ref_map, &tail);
 		if (tags_ref_map) {
 			/*
 			 * If backfilling tags succeeds we used to not return
@@ -1638,15 +1651,31 @@  static int do_fetch(struct transport *transport,
 			 * state of the repository. We now notify the user of
 			 * any such errors, but we continue to make sure that
 			 * FETCH_HEAD and the upstream branch are configured as
-			 * expected.
+			 * expected. The exception though is when `--atomic`
+			 * is passed: in that case we'll abort the transaction
+			 * and don't commit anything.
 			 */
-			if (backfill_tags(transport, tags_ref_map, &fetch_head, worktrees))
+			if (backfill_tags(transport, transaction, tags_ref_map,
+					  &fetch_head, worktrees))
 				retcode = 1;
 		}
 
 		free_refs(tags_ref_map);
 	}
 
+	if (transaction) {
+		if (retcode)
+			goto cleanup;
+
+		retcode = ref_transaction_commit(transaction, &err);
+		if (retcode) {
+			error("%s", err.buf);
+			ref_transaction_free(transaction);
+			transaction = NULL;
+			goto cleanup;
+		}
+	}
+
 	commit_fetch_head(&fetch_head);
 
 	if (set_upstream) {
@@ -1704,7 +1733,13 @@  static int do_fetch(struct transport *transport,
 	}
 
 cleanup:
+	if (retcode && transaction) {
+		ref_transaction_abort(transaction, &err);
+		error("%s", err.buf);
+	}
+
 	close_fetch_head(&fetch_head);
+	strbuf_release(&err);
 	free_refs(ref_map);
 	free_worktrees(worktrees);
 	return retcode;
diff --git a/t/t5503-tagfollow.sh b/t/t5503-tagfollow.sh
index 549f908b90..4a8e63aa16 100755
--- a/t/t5503-tagfollow.sh
+++ b/t/t5503-tagfollow.sh
@@ -182,11 +182,8 @@  test_expect_success 'atomic fetch with failing backfill' '
 	EOF
 
 	test_must_fail git -C clone3 fetch --atomic .. $B:refs/heads/something &&
-
-	# Creation of the tag has failed, so ideally refs/heads/something
-	# should not exist. The fact that it does is demonstrates that there is
-	# missing coverage in the `--atomic` flag.
-	test $B = "$(git -C clone3 rev-parse --verify refs/heads/something)"
+	test_must_fail git -C clone3 rev-parse --verify refs/heads/something &&
+	test_must_fail git -C clone3 rev-parse --verify refs/tags/tag2
 '
 
 test_expect_success 'atomic fetch with backfill should use single transaction' '
@@ -199,12 +196,10 @@  test_expect_success 'atomic fetch with backfill should use single transaction' '
 		prepared
 		$ZERO_OID $B refs/heads/something
 		$ZERO_OID $S refs/tags/tag2
+		$ZERO_OID $T refs/tags/tag1
 		committed
 		$ZERO_OID $B refs/heads/something
 		$ZERO_OID $S refs/tags/tag2
-		prepared
-		$ZERO_OID $T refs/tags/tag1
-		committed
 		$ZERO_OID $T refs/tags/tag1
 	EOF