[v2,00/24] pack-bitmap: bitmap generation improvements

Message ID: cover.1605649533.git.me@ttaylorr.com

Message

Taylor Blau Nov. 17, 2020, 9:46 p.m. UTC
Here is an updated version of this series, which improves the
performance of generating reachability bitmaps in large repositories.

Not very much has changed since last time, but a range-diff is below
nonetheless. The major changes are:

  - Avoid an overflow when bounds checking in the second and third
    patches (thanks, Martin, for noticing).
  - Incorporate a fix to avoid reading beyond an EWAH bitmap by double
    checking our read before actually doing it (thanks, Peff).
  - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
    and Peff).
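
Regarding the sha256 hardening above: if you want to reproduce those
runs locally, the test suite's hash knob should be all that is needed.
Roughly (the -v/-i flags are just a habit, not required):

  # run the bitmap tests under SHA-256 instead of SHA-1
  cd t
  GIT_TEST_DEFAULT_HASH=sha256 ./t5310-pack-bitmaps.sh -v -i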

Derrick Stolee (9):
  pack-bitmap-write: fill bitmap with commit history
  bitmap: add bitmap_diff_nonzero()
  commit: implement commit_list_contains()
  t5310: add branch-based checks
  pack-bitmap-write: rename children to reverse_edges
  pack-bitmap-write: build fewer intermediate bitmaps
  pack-bitmap-write: use existing bitmaps
  pack-bitmap-write: relax unique rewalk condition
  pack-bitmap-write: better reuse bitmaps

Jeff King (11):
  pack-bitmap: fix header size check
  pack-bitmap: bounds-check size of cache extension
  t5310: drop size of truncated ewah bitmap
  rev-list: die when --test-bitmap detects a mismatch
  ewah: factor out bitmap growth
  ewah: make bitmap growth less aggressive
  ewah: implement bitmap_or()
  ewah: add bitmap_dup() function
  pack-bitmap-write: reimplement bitmap writing
  pack-bitmap-write: pass ownership of intermediate bitmaps
  pack-bitmap-write: ignore BITMAP_FLAG_REUSE

Taylor Blau (4):
  ewah/ewah_bitmap.c: grow buffer past 1
  pack-bitmap.c: check reads more aggressively when loading
  pack-bitmap: factor out 'bitmap_for_commit()'
  pack-bitmap: factor out 'add_commit_to_bitmap()'

 builtin/pack-objects.c  |   1 -
 commit.c                |  11 +
 commit.h                |   2 +
 ewah/bitmap.c           |  54 ++++-
 ewah/ewah_bitmap.c      |   2 +-
 ewah/ewok.h             |   3 +-
 pack-bitmap-write.c     | 452 +++++++++++++++++++++++++---------------
 pack-bitmap.c           | 139 ++++++------
 pack-bitmap.h           |   8 +-
 t/t5310-pack-bitmaps.sh | 164 ++++++++++++---
 10 files changed, 555 insertions(+), 281 deletions(-)

Range-diff against v1:
 -:  ---------- >  1:  07054ff8ee ewah/ewah_bitmap.c: grow buffer past 1
 1:  1970a70207 !  2:  74a13b4a6e pack-bitmap: fix header size check
    @@ pack-bitmap.c: static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *ind
     +	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;

     -	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
    -+	if (index->map_size < header_size)
    - 		return error("Corrupted bitmap index (missing header data)");
    +-		return error("Corrupted bitmap index (missing header data)");
    ++	if (index->map_size < header_size + the_hash_algo->rawsz)
    ++		return error("Corrupted bitmap index (too small)");

      	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
    + 		return error("Corrupted bitmap index file (wrong header)");
     @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      	}

 2:  36b1815d03 !  3:  db11116dac pack-bitmap: bounds-check size of cache extension
    @@ Commit message
         pack-bitmap: bounds-check size of cache extension

         A .bitmap file may have a "name hash cache" extension, which puts a
    -    sequence of uint32_t bytes (one per object) at the end of the file. When
    -    we see a flag indicating this extension, we blindly subtract the
    +    sequence of uint32_t values (one per object) at the end of the file.
    +    When we see a flag indicating this extension, we blindly subtract the
         appropriate number of bytes from our available length. However, if the
         .bitmap file is too short, we'll underflow our length variable and wrap
         around, thinking we have a very large length. This can lead to reading
    @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      	/* Parse known bitmap format options */
      	{
      		uint32_t flags = ntohs(header->options);
    -+		uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
    ++		size_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
     +		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;

      		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
    @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      		if (flags & BITMAP_OPT_HASH_CACHE) {
     -			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
     -			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
    -+			if (index->map + header_size + cache_size > index_end)
    ++			if (cache_size > index_end - index->map - header_size)
     +				return error("corrupted bitmap index file (too short to fit hash cache)");
     +			index->hashes = (void *)(index_end - cache_size);
     +			index_end -= cache_size;
 3:  edfec2ea62 =  4:  f779e76f82 t5310: drop size of truncated ewah bitmap
 4:  f3fec466f7 =  5:  1a9ac1c4ae rev-list: die when --test-bitmap detects a mismatch
 5:  b35012f44d =  6:  9bb1ea3b19 ewah: factor out bitmap growth
 6:  53b8bea98c =  7:  f8426c7e8b ewah: make bitmap growth less aggressive
 7:  98e3bfc1b2 =  8:  674e31f98e ewah: implement bitmap_or()
 8:  1bd115fc51 =  9:  a903c949d8 ewah: add bitmap_dup() function
 9:  adf16557c2 = 10:  c951206729 pack-bitmap-write: reimplement bitmap writing
10:  27992687c9 = 11:  466dd3036a pack-bitmap-write: pass ownership of intermediate bitmaps
11:  d92fb0e1e1 = 12:  8e5607929d pack-bitmap-write: fill bitmap with commit history
12:  bf86cb6196 = 13:  4840c64c51 bitmap: add bitmap_diff_nonzero()
13:  78cdf847aa = 14:  63e846f4e8 commit: implement commit_list_contains()
14:  778e9e9c44 = 15:  8b5d239333 t5310: add branch-based checks
15:  526d3509ef = 16:  60a46091bb pack-bitmap-write: rename children to reverse_edges
 -:  ---------- > 17:  8f7bb2dd2e pack-bitmap.c: check reads more aggressively when loading
16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
    @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
      '

      test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
    +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
    + 	git rev-list --use-bitmap-index --count --all >expect &&
    + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
    + 	test_when_finished "rm -f $bitmap" &&
    +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
    ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
    + 	mv -f $bitmap.tmp $bitmap &&
    + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
    + 	test_cmp expect actual &&
17:  e4f296100c = 19:  a206f48614 pack-bitmap-write: ignore BITMAP_FLAG_REUSE
18:  6e856bcf75 = 20:  9928b3c7da pack-bitmap: factor out 'bitmap_for_commit()'
19:  9b5f595f50 = 21:  f40a39a48a pack-bitmap: factor out 'add_commit_to_bitmap()'
20:  c458f98e11 = 22:  4bf5e78a54 pack-bitmap-write: use existing bitmaps
21:  3026876e7a = 23:  1da4fa0fb8 pack-bitmap-write: relax unique rewalk condition
22:  ce2716e291 = 24:  42399a1c2e pack-bitmap-write: better reuse bitmaps
--
2.29.2.312.gabc4d358d8

Comments

SZEDER Gábor Nov. 18, 2020, 6:32 p.m. UTC | #1
On Tue, Nov 17, 2020 at 04:46:16PM -0500, Taylor Blau wrote:
>   - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
>     and Peff).

Fixing this is good, of course, but...

> 16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
>     @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
>       '
> 
>       test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
>     +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
>     + 	git rev-list --use-bitmap-index --count --all >expect &&
>     + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
>     + 	test_when_finished "rm -f $bitmap" &&
>     +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
>     ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
>     + 	mv -f $bitmap.tmp $bitmap &&
>     + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
>     + 	test_cmp expect actual &&

Please don't simply sneak in such a change without explaining it in
the commit message.
Taylor Blau Nov. 18, 2020, 7:51 p.m. UTC | #2
On Wed, Nov 18, 2020 at 07:32:25PM +0100, SZEDER Gábor wrote:
> On Tue, Nov 17, 2020 at 04:46:16PM -0500, Taylor Blau wrote:
> >   - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
> >     and Peff).
>
> Fixing this is good, of course, but...
>
> > 16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
> >     @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
> >       '
> >
> >       test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
> >     +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
> >     + 	git rev-list --use-bitmap-index --count --all >expect &&
> >     + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
> >     + 	test_when_finished "rm -f $bitmap" &&
> >     +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
> >     ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
> >     + 	mv -f $bitmap.tmp $bitmap &&
> >     + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
> >     + 	test_cmp expect actual &&
>
> Please don't simply sneak in such a change without explaining it in
> the commit message.

Ah, I certainly didn't mean to go under the radar, so to speak ;-). From
my perspective, the final patch looks like it picked a magic number in
the same way was the original version of this patch did, so I didn't
think to add any more detail there.

I did try and highlight this a little bit in the patch just before the
one you're commenting on, though:

  - Adding more tests in this area. Testing these truncation situations
    is remarkably fragile to even subtle changes in the bitmap
    generation. So, the resulting tests are likely to be quite brittle.

Thanks,
Taylor
Martin Ågren Nov. 20, 2020, 6:34 a.m. UTC | #3
On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> Not very much has changed since last time, but a range-diff is below
> nonetheless. The major changes are:
>
>   - Avoid an overflow when bounds checking in the second and third
>     patches (thanks, Martin, for noticing).

FWIW, the updates to patches 2 and 3 look exactly like what I was
expecting after the discussion on v1. I have nothing to add.


Martin
Junio C Hamano Nov. 21, 2020, 7:37 p.m. UTC | #4
Martin Ågren <martin.agren@gmail.com> writes:

> On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
>> Not very much has changed since last time, but a range-diff is below
>> nonetheless. The major changes are:
>>
>>   - Avoid an overflow when bounds checking in the second and third
>>     patches (thanks, Martin, for noticing).
>
> FWIW, the updates to patches 2 and 3 look exactly like what I was
> expecting after the discussion on v1. I have nothing to add.

Thanks, both.  Shall we move the topic down to 'next'?
Martin Ågren Nov. 21, 2020, 8:11 p.m. UTC | #5
On Sat, 21 Nov 2020 at 20:37, Junio C Hamano <gitster@pobox.com> wrote:
>
> Martin Ågren <martin.agren@gmail.com> writes:
>
> > On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> >> Not very much has changed since last time, but a range-diff is below
> >> nonetheless. The major changes are:
> >>
> >>   - Avoid an overflow when bounds checking in the second and third
> >>     patches (thanks, Martin, for noticing).
> >
> > FWIW, the updates to patches 2 and 3 look exactly like what I was
> > expecting after the discussion on v1. I have nothing to add.
>
> Thanks, both.  Shall we move the topic down to 'next'?

I really only dug into those patches 2 and 3. I read the rest of the
patches of v1 and went "that makes sense", but that's about it. I
started looking at "pack-bitmap-write: build fewer intermediate bitmaps"
and went "this looks really cool -- I should try to understand this". :-)

There was SZEDER's comment on that last patch in v2, where future
readers of that patch will have to wonder why it does s/256/270/ in a
test. I agree with SZEDER that the change should be mentioned in the
commit message, even if it's just "unfortunately, we have some magicness
here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
hits the problem we want to test for".

Martin
Taylor Blau Nov. 22, 2020, 2:17 a.m. UTC | #6
On Wed, Nov 18, 2020 at 02:51:59PM -0500, Taylor Blau wrote:
> On Wed, Nov 18, 2020 at 07:32:25PM +0100, SZEDER Gábor wrote:
> > Please don't simply sneak in such a change without explaining it in
> > the commit message.
>
> Ah, I certainly didn't mean to go under the radar, so to speak ;-). From
> my perspective, the final patch looks like it picked a magic number in
> the same way as the original version of this patch did, so I didn't
> think to add any more detail there.

Oops; when I wrote that to you, I had in my mind that this patch already
changed that line in the tests, so the rerolled patch was simply
changing it to something different.

But, this isn't a new test from this patch's perspective, so I'll make a
note of why this changed in a replacement.

Thanks.

Taylor
Taylor Blau Nov. 22, 2020, 2:28 a.m. UTC | #7
On Sat, Nov 21, 2020 at 09:17:43PM -0500, Taylor Blau wrote:
> But, this isn't a new test from this patch's perspective, so I'll make a
> note of why this changed in a replacement.

Here's that patch: let's use it as a replacement when queueing. Thanks
again for noticing.
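
(If you are picking this up from the archive rather than queueing it
directly: the scissors line below means the whole message can be fed to
"git am --scissors"; the mbox filename here is only illustrative.)

  git am --scissors v2-replacement.mbox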

--- 8< ---

From: Derrick Stolee <dstolee@microsoft.com>
Subject: [PATCH] pack-bitmap-write: build fewer intermediate bitmaps

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
instead focus on selecting each commit that is maximal among commits
with the same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is not one of the
originally-selected commits, then assign a bit position to C (by
incrementing num_maximal) and set that bit on in commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C.

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.

The final thing we need to do is to update the reverse edges for P.
These reverse edges represent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and this list is exactly one element). Add all of these items
to P's reverse_edges list. Be careful to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is due to
the existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     |/
     Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N is assigned
its own bit in the bitmask and hence becomes uniquely maximal for
its bitmask. Finally, Q also becomes maximal and thus we do not need
to walk its commits multiple times. The final bitmasks for these commits
are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:1111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any issues with keeping existing bitmaps to skew the
numbers.
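
Concretely, the invocation looks roughly like this; the trace path is
illustrative, and GIT_TRACE2_PERF also accepts "1" to write the perf
stream to stderr instead of a file:

  # in a clone with no existing .bitmap files
  GIT_TRACE2_PERF="$(pwd)/repack.perf" git repack -abd
  grep building_bitmaps_total repack.perf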

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we've not only recovered any time lost to switching to the
reverse-edge algorithm, but we also come out ahead of "multi" in all
cases. Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

MISCELLANEOUS
-------------

Unluckily, this patch causes bitmaps to change in such a way that
t5310.66 no longer passes as-is in SHA-256 mode. That test is asserting
that truncated .bitmap files fail gracefully. But whether we notice that
truncation depends on which bitmaps we try to look at, and exactly where
the truncation is.

Since this commit happens to rearrange the bytes in that exact region by
changing the way commits are selected (and thus shifting around other
bitmaps), move the truncation to a spot where it will be noticed in both
SHA-1 and SHA-256 mode.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 87 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 149 insertions(+), 10 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 369c76a87c..7b4fc0f304 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)

 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };

@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;

 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;

 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");

 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;

 		parse_commit_or_die(commit);

-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}

 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }

 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 6bf68fee85..1691710ec1 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,11 +20,87 @@ has_any () {
 	grep -Ff "$1" "$2"
 }

+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \____________...____________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge master~2 -m "merge-left" &&
+
+	git checkout -b merge-right master~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-master master &&
+	git merge merge-left merge-right -m "octopus-master" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout master &&
+	git merge octo-master -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-master &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout master &&
+
 	bitmaptip=$(git rev-parse master) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -32,9 +108,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '

 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '

 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
@@ -356,7 +435,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
--
2.29.2.312.gabc4d358d8
Taylor Blau Nov. 22, 2020, 2:31 a.m. UTC | #8
On Sat, Nov 21, 2020 at 09:11:21PM +0100, Martin Ågren wrote:
> On Sat, 21 Nov 2020 at 20:37, Junio C Hamano <gitster@pobox.com> wrote:
> >
> > Martin Ågren <martin.agren@gmail.com> writes:
> >
> > > On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> > >> Not very much has changed since last time, but a range-diff is below
> > >> nonetheless. The major changes are:
> > >>
> > >>   - Avoid an overflow when bounds checking in the second and third
> > >>     patches (thanks, Martin, for noticing).
> > >
> > > FWIW, the updates to patches 2 and 3 look exactly like what I was
> > > expecting after the discussion on v1. I have nothing to add.
> >
> > Thanks, both.  Shall we move the topic down to 'next'?
>
> I really only dug into those patches 2 and 3. I read the rest of the
> patches of v1 and went "that makes sense", but that's about it. I
> started looking at "pack-bitmap-write: build fewer intermediate bitmaps"
> and went "this looks really cool -- I should try to understand this". :-)
>
> There was SZEDER's comment on that last patch in v2, where future
> readers of that patch will have to wonder why it does s/256/270/ in a
> test. I agree with SZEDER that the change should be mentioned in the
> commit message, even if it's just "unfortunately, we have some magicness
> here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> hits the problem we want to test for".

Thanks for reviewing it, and noticing a couple of problems in the
earlier patches, too. If folks are happy with the replacement that I
sent [1], then I am too :-).

I don't think that the "big" patch generated a ton of review on the
list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
extensively when deploying it at GitHub (where it has been running since
late Summer).

> Martin

Thanks,
Taylor

[1]: https://lore.kernel.org/git/X7nMzzMfjm%2Fp9qfj@xnor.local/
Jeff King Nov. 24, 2020, 2:43 a.m. UTC | #9
On Sat, Nov 21, 2020 at 09:31:50PM -0500, Taylor Blau wrote:

> > There was SZEDER's comment on that last patch in v2, where future
> > readers of that patch will have to wonder why it does s/256/270/ in a
> > test. I agree with SZEDER that the change should be mentioned in the
> > commit message, even if it's just "unfortunately, we have some magicness
> > here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> > hits the problem we want to test for".
> 
> Thanks for reviewing it, and noticing a couple of problems in the
> earlier patches, too. If folks are happy with the replacement that I
> sent [1], then I am too :-).
> 
> I don't think that the "big" patch generated a ton of review on the
> list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
> extensively when deploying it at GitHub (where it has been running since
> late Summer).

Hrm. I thought you were going to integrate the extra checks I suggested
for load_bitmap_entries_v1(). Which it looks like you did in patch 17.
After that, the s/256/270/ hack should not be necessary anymore (if it
is, then we should keep fixing more spots).

-Peff
Taylor Blau Dec. 1, 2020, 11:04 p.m. UTC | #10
On Mon, Nov 23, 2020 at 09:43:36PM -0500, Jeff King wrote:
> On Sat, Nov 21, 2020 at 09:31:50PM -0500, Taylor Blau wrote:
>
> > > There was SZEDER's comment on that last patch in v2, where future
> > > readers of that patch will have to wonder why it does s/256/270/ in a
> > > test. I agree with SZEDER that the change should be mentioned in the
> > > commit message, even if it's just "unfortunately, we have some magicness
> > > here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> > > hits the problem we want to test for".
> >
> > Thanks for reviewing it, and noticing a couple of problems in the
> > earlier patches, too. If folks are happy with the replacement that I
> > sent [1], then I am too :-).
> >
> > I don't think that the "big" patch generated a ton of review on the
> > list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
> > extensively when deploying it at GitHub (where it has been running since
> > late Summer).
>
> Hrm. I thought you were going to integrate the extra checks I suggested
> for load_bitmap_entries_v1(). Which it looks like you did in patch 17.
> After that, the s/256/270/ hack should not be necessary anymore (if it
> is, then we should keep fixing more spots).

Oops. I even wrote down a big "S/256/270" in my notebook after you and I
talked last about it, and then promptly forgot about it before sending
v2.

In any case, I have all of that fixed up, as well as other comments and
suggestions from review, all of which were very helpful. (Thanks
everybody who took a look at this monstrously large series, and
apologies in advance for more to come ;-)).

I think I would like a little more clarification on the discussion in
[1]. From my reading, Jonathan Tan wants comments about the algorithm in
the code, but Stolee would rather rely on the commits, especially since
the algorithm changes later on in the series relative to the patch that
this is downthread from.

Once we can reach a good decision there, I'll send a v3 (which currently
lives in my fork[2]).

> -Peff

Thanks,
Taylor

[1]: https://lore.kernel.org/git/ea0c8c5d-6bc3-0dca-4fa1-fb461ed7ccb9@gmail.com/
[2]: https://github.com/ttaylorr/git/compare/tb/bitmap-build-fast-for-upstream
Jonathan Tan Dec. 1, 2020, 11:37 p.m. UTC | #11
> I think I would like a little more clarification on the discussion in
> [1]. From my reading, Jonathan Tan wants comments about the algorithm in
> the code, but Stolee would rather rely on the commits, especially since
> the algorithm changes later on in the series relative to the patch that
> this is downthread from.
> 
> Once we can reach a good decision there, I'll send a v3 (which currently
> lives in my fork[2]).

I did, but Stolee has a point that the algorithm will change later on.
I'm OK with the parts I reviewed (patches 10 to 18).
Taylor Blau Dec. 1, 2020, 11:43 p.m. UTC | #12
On Tue, Dec 01, 2020 at 03:37:25PM -0800, Jonathan Tan wrote:
> > Once we can reach a good decision there, I'll send a v3 (which currently
> > lives in my fork[2]).
>
> I did, but Stolee has a point that the algorithm will change later on.
> I'm OK with the parts I reviewed (patches 10 to 18).

Ah, good to know. I'll hold off on a v3, then, until you have had a
chance to look through the remaining handful of patches (if you were
planning on doing that). I haven't touched those locally, so it'll be
good to hear any comments you might have before sending another version.

Thanks for all of your very helpful review :-).

Taylor
Jonathan Tan Dec. 2, 2020, 8:11 a.m. UTC | #13
> On Tue, Dec 01, 2020 at 03:37:25PM -0800, Jonathan Tan wrote:
> > > Once we can reach a good decision there, I'll send a v3 (which currently
> > > lives in my fork[2]).
> >
> > I did, but Stolee has a point that the algorithm will change later on.
> > I'm OK with the parts I reviewed (patches 10 to 18).
> 
> Ah, good to know. I'll hold off on a v3, then, until you have had a
> chance to look through the remaining handful of patches (if you were
> planning on doing that). I haven't touched those locally, so it'll be
> good to hear any comments you might have before sending another version.
> 
> Thanks for all of your very helpful review :-).
> 
> Taylor

You're welcome! I've gone ahead and reviewed all the patches subsequent
to 18. I think others have looked at the patches prior to 10, so I'm not
planning to review the others.