| Message ID | 20181015021900.1030041-5-sandals@crustytoothpaste.net |
|---|---|
| State | New, archived |
| Series | Base SHA-256 implementation |
On Mon, Oct 15, 2018 at 4:21 AM brian m. carlson <sandals@crustytoothpaste.net> wrote:
> diff --git a/cache.h b/cache.h
> index a13d14ce0a..0b88c3a344 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -1024,16 +1024,12 @@ extern const struct object_id null_oid;
>  static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
>  {
>  	/*
> -	 * This is a temporary optimization hack. By asserting the size here,
> -	 * we let the compiler know that it's always going to be 20, which lets
> -	 * it turn this fixed-size memcmp into a few inline instructions.
> -	 *
> -	 * This will need to be extended or ripped out when we learn about
> -	 * hashes of different sizes.
> +	 * Teach the compiler that there are only two possibilities of hash size
> +	 * here, so that it can optimize for this case as much as possible.
>  	 */
> -	if (the_hash_algo->rawsz != 20)
> -		BUG("hash size not yet supported by hashcmp");
> -	return memcmp(sha1, sha2, the_hash_algo->rawsz);
> +	if (the_hash_algo->rawsz == GIT_MAX_RAWSZ)

It's a tangent, but performance is probably another good reason to detach the_hash_algo from the_repository so we have one less dereference to do. (The other good reason is that these hash operations should work in "no-repo" commands as well, where the_repository does not really make sense.)

> +		return memcmp(sha1, sha2, GIT_MAX_RAWSZ);
> +	return memcmp(sha1, sha2, GIT_SHA1_RAWSZ);
>  }
>
>  static inline int oidcmp(const struct object_id *oid1, const struct object_id *oid2)
> @@ -1043,7 +1039,13 @@ static inline int oidcmp(const struct object_id *oid1, const struct object_id *o
>
>  static inline int hasheq(const unsigned char *sha1, const unsigned char *sha2)
>  {
> -	return !hashcmp(sha1, sha2);
> +	/*
> +	 * We write this here instead of deferring to hashcmp so that the
> +	 * compiler can properly inline it and avoid calling memcmp.
> +	 */
> +	if (the_hash_algo->rawsz == GIT_MAX_RAWSZ)
> +		return !memcmp(sha1, sha2, GIT_MAX_RAWSZ);
> +	return !memcmp(sha1, sha2, GIT_SHA1_RAWSZ);
>  }
>
>  static inline int oideq(const struct object_id *oid1, const struct object_id *oid2)
diff --git a/cache.h b/cache.h
index a13d14ce0a..0b88c3a344 100644
--- a/cache.h
+++ b/cache.h
@@ -1024,16 +1024,12 @@ extern const struct object_id null_oid;
 static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
 {
 	/*
-	 * This is a temporary optimization hack. By asserting the size here,
-	 * we let the compiler know that it's always going to be 20, which lets
-	 * it turn this fixed-size memcmp into a few inline instructions.
-	 *
-	 * This will need to be extended or ripped out when we learn about
-	 * hashes of different sizes.
+	 * Teach the compiler that there are only two possibilities of hash size
+	 * here, so that it can optimize for this case as much as possible.
 	 */
-	if (the_hash_algo->rawsz != 20)
-		BUG("hash size not yet supported by hashcmp");
-	return memcmp(sha1, sha2, the_hash_algo->rawsz);
+	if (the_hash_algo->rawsz == GIT_MAX_RAWSZ)
+		return memcmp(sha1, sha2, GIT_MAX_RAWSZ);
+	return memcmp(sha1, sha2, GIT_SHA1_RAWSZ);
 }

 static inline int oidcmp(const struct object_id *oid1, const struct object_id *oid2)
@@ -1043,7 +1039,13 @@ static inline int oidcmp(const struct object_id *oid1, const struct object_id *o

 static inline int hasheq(const unsigned char *sha1, const unsigned char *sha2)
 {
-	return !hashcmp(sha1, sha2);
+	/*
+	 * We write this here instead of deferring to hashcmp so that the
+	 * compiler can properly inline it and avoid calling memcmp.
+	 */
+	if (the_hash_algo->rawsz == GIT_MAX_RAWSZ)
+		return !memcmp(sha1, sha2, GIT_MAX_RAWSZ);
+	return !memcmp(sha1, sha2, GIT_SHA1_RAWSZ);
 }

 static inline int oideq(const struct object_id *oid1, const struct object_id *oid2)
In 183a638b7d ("hashcmp: assert constant hash size", 2018-08-23), we modified hashcmp to assert that the hash size was always 20 to help it optimize and inline calls to memcmp. In a future series, we replaced many calls to hashcmp and oidcmp with calls to hasheq and oideq to improve inlining further.

However, we want to support hash algorithms other than SHA-1, namely SHA-256. When doing so, we must handle the case where these values are 32 bytes long as well as 20.

Adjust hashcmp to handle two cases: 20-byte matches, and maximum-size matches. Therefore, when we include SHA-256, we'll automatically handle it properly, while at the same time teaching the compiler that there are only two possible options to consider. This will allow the compiler to write the most efficient possible code.

Copy similar code into hasheq and perform an identical transformation. At least with GCC 8.2.0, making hasheq defer to hashcmp when there are two branches prevents the compiler from inlining the comparison, while the code in this patch is inlined properly. Add a comment to avoid an accidental performance regression from well-intentioned refactoring.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
---
 cache.h | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)