From: Jeff King
To: Junio C Hamano
Cc: Han-Wen Nienhuys, git, Han-Wen Nienhuys
Subject: [PATCH 0/2] drop unaligned loads
Date: Thu, 24 Sep 2020 15:16:38 -0400
Message-ID: <20200924191638.GA2528003@coredump.intra.peff.net>

On Thu, Sep 24, 2020 at 10:22:20AM -0700, Junio C Hamano wrote:

> Jeff King writes:
>
> > Then I did the same, but building with -DNO_UNALIGNED_LOADS. The latter
> > actually ran faster, by a small margin. Here are the hyperfine results:
> >
> >   [stock]
> >   Time (mean ± σ):   6.638 s ± 0.081 s   [User: 6.269 s, System: 0.368 s]
> >   Range (min … max): 6.550 s … 6.841 s   10 runs
> >
> >   [-DNO_UNALIGNED_LOADS]
> >   Time (mean ± σ):   6.418 s ± 0.015 s   [User: 6.058 s, System: 0.360 s]
> >   Range (min … max): 6.394 s … 6.447 s   10 runs
> >
> > For casual use as in reftables I doubt the difference is even
> > measurable. But this result implies that perhaps we ought to just be
> > using the fallback version all the time.
>
> I like that one. One less configurable knob that makes us execute
> different codepaths is one less thing to be worried about.

Here it is with a little more research, then, and a cleanup we can do on
top.

  [1/2]: bswap.h: drop unaligned loads
  [2/2]: Revert "fast-export: use local array to store anonymized oid"

 Makefile              |  1 -
 builtin/fast-export.c |  8 ++++----
 compat/bswap.h        | 24 ------------------------
 3 files changed, 4 insertions(+), 29 deletions(-)

-Peff
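
For context, the knob being dropped selects between two implementations of
get_be32() and friends in compat/bswap.h. A rough sketch of the two flavors
follows; the function names and details here are illustrative, not the exact
code the patch removes:

  #include <stdint.h>
  #include <arpa/inet.h>

  /*
   * "Unaligned load" flavor: reinterpret the bytes as a uint32_t and
   * convert from big-endian. Only usable on platforms that tolerate
   * unaligned reads, and it leans on a cast that strict aliasing
   * frowns upon.
   */
  static inline uint32_t get_be32_unaligned(const void *p)
  {
          return ntohl(*(const uint32_t *)p);
  }

  /*
   * Fallback flavor: assemble the value a byte at a time. Correct on
   * any alignment and any platform; modern compilers typically fold
   * this into a single load plus byte swap anyway, which is consistent
   * with the benchmark above showing no penalty for using it.
   */
  static inline uint32_t get_be32_fallback(const unsigned char *p)
  {
          return  (uint32_t)p[0] << 24 |
                  (uint32_t)p[1] << 16 |
                  (uint32_t)p[2] <<  8 |
                  (uint32_t)p[3];
  }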