From patchwork Wed Mar 25 06:50:20 2020
From: Robert Hoo
To: qemu-devel@nongnu.org, pbonzini@redhat.com, richard.henderson@linaro.org
Cc: robert.hu@intel.com, Robert Hoo
Subject: [PATCH 1/2] util/bufferiszero: assign length_to_accel value for each accelerator case
Date: Wed, 25 Mar 2020 14:50:20 +0800
Message-Id: <1585119021-46593-1-git-send-email-robert.hu@linux.intel.com>

In the unit test, init_accel() will be called several times, each time with
a different accelerator type, so length_to_accel must be assigned in every
accelerator case rather than left at whatever a previous pass set.
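To illustrate why a stale threshold matters, here is a minimal, self-contained
sketch. This is hypothetical code, not the QEMU source: the dispatcher, the
simplified init_accel() signature and the fake_zero_avx2() helper are
stand-ins named only for this example. It shows how a length_to_accel value
gates dispatch to the accelerated routine, so each accelerator case has to set
it alongside the function pointer.

/* Hypothetical sketch, not the QEMU source: all names here are stand-ins.
 * It shows why the function pointer and its length threshold must be set
 * together in every accelerator case of init_accel(). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static size_t length_to_accel = 64;                 /* minimum len for the SIMD path */
static bool (*buffer_accel)(const void *, size_t);  /* chosen by init_accel()        */

/* Portable fallback: plain byte scan. */
static bool buffer_zero_int(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    for (size_t i = 0; i < len; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/* Stand-in for buffer_zero_avx2(): written to assume len >= 128. */
static bool fake_zero_avx2(const void *buf, size_t len)
{
    return buffer_zero_int(buf, len);   /* the real code uses _mm256_* loads */
}

/* Simplified init_accel(): each case sets both the function and the threshold
 * (patch 2 raises the avx2 threshold to 128, used here for illustration). */
static void init_accel(bool have_avx2)
{
    if (have_avx2) {
        buffer_accel = fake_zero_avx2;
        length_to_accel = 128;
    } else {
        buffer_accel = buffer_zero_int;
        length_to_accel = 64;
    }
}

static bool buffer_is_zero(const void *buf, size_t len)
{
    /* Short buffers skip the accelerator entirely. */
    if (len >= length_to_accel) {
        return buffer_accel(buf, len);
    }
    return buffer_zero_int(buf, len);
}

int main(void)
{
    unsigned char buf[256] = { 0 };

    /* The unit test re-runs init_accel() once per available accelerator;
     * every pass must leave (buffer_accel, length_to_accel) consistent. */
    init_accel(false);
    printf("fallback: %d\n", buffer_is_zero(buf, sizeof(buf)));

    init_accel(true);
    buf[200] = 1;
    printf("avx2-ish: %d\n", buffer_is_zero(buf, sizeof(buf)));
    return 0;
}

If only some cases updated the threshold, a second init_accel() pass could
leave the (buffer_accel, length_to_accel) pair mismatched; setting both
together in every branch keeps them consistent, which is what the hunk below
does.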
Signed-off-by: Robert Hoo
---
 util/bufferiszero.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/util/bufferiszero.c b/util/bufferiszero.c
index 6639035..b801253 100644
--- a/util/bufferiszero.c
+++ b/util/bufferiszero.c
@@ -254,13 +254,16 @@ static void init_accel(unsigned cache)
     bool (*fn)(const void *, size_t) = buffer_zero_int;
     if (cache & CACHE_SSE2) {
         fn = buffer_zero_sse2;
+        length_to_accel = 64;
     }
 #ifdef CONFIG_AVX2_OPT
     if (cache & CACHE_SSE4) {
         fn = buffer_zero_sse4;
+        length_to_accel = 64;
     }
     if (cache & CACHE_AVX2) {
         fn = buffer_zero_avx2;
+        length_to_accel = 64;
     }
 #endif
 #ifdef CONFIG_AVX512F_OPT

From patchwork Wed Mar 25 06:50:21 2020
From: Robert Hoo
To: qemu-devel@nongnu.org, pbonzini@redhat.com, richard.henderson@linaro.org
Cc: robert.hu@intel.com, Robert Hoo
Subject: [PATCH 2/2] util/bufferiszero: improve avx2 accelerator
Date: Wed, 25 Mar 2020 14:50:21 +0800
Message-Id: <1585119021-46593-2-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1585119021-46593-1-git-send-email-robert.hu@linux.intel.com>
References: <1585119021-46593-1-git-send-email-robert.hu@linux.intel.com>

By increasing the avx2 length_to_accel to 128, we can simplify its logic
and remove a branch.

The authorship of this patch actually belongs to Richard Henderson; I just
fixed a boundary case in his original patch.

Suggested-by: Richard Henderson
Signed-off-by: Robert Hoo
---
 util/bufferiszero.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/util/bufferiszero.c b/util/bufferiszero.c
index b801253..695bb4c 100644
--- a/util/bufferiszero.c
+++ b/util/bufferiszero.c
@@ -158,27 +158,19 @@ buffer_zero_avx2(const void *buf, size_t len)
     __m256i *p = (__m256i *)(((uintptr_t)buf + 5 * 32) & -32);
     __m256i *e = (__m256i *)(((uintptr_t)buf + len) & -32);
 
-    if (likely(p <= e)) {
-        /* Loop over 32-byte aligned blocks of 128.  */
-        do {
-            __builtin_prefetch(p);
-            if (unlikely(!_mm256_testz_si256(t, t))) {
-                return false;
-            }
-            t = p[-4] | p[-3] | p[-2] | p[-1];
-            p += 4;
-        } while (p <= e);
-    } else {
-        t |= _mm256_loadu_si256(buf + 32);
-        if (len <= 128) {
-            goto last2;
+    /* Loop over 32-byte aligned blocks of 128.  */
+    while (p <= e) {
+        __builtin_prefetch(p);
+        if (unlikely(!_mm256_testz_si256(t, t))) {
+            return false;
         }
-    }
+        t = p[-4] | p[-3] | p[-2] | p[-1];
+        p += 4;
+    } ;
 
     /* Finish the last block of 128 unaligned.  */
     t |= _mm256_loadu_si256(buf + len - 4 * 32);
     t |= _mm256_loadu_si256(buf + len - 3 * 32);
- last2:
     t |= _mm256_loadu_si256(buf + len - 2 * 32);
     t |= _mm256_loadu_si256(buf + len - 1 * 32);
 
@@ -263,7 +255,7 @@ static void init_accel(unsigned cache)
     }
     if (cache & CACHE_AVX2) {
         fn = buffer_zero_avx2;
-        length_to_accel = 64;
+        length_to_accel = 128;
     }
 #endif
 #ifdef CONFIG_AVX512F_OPT
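Aside (not part of the patch): the boundary argument for length_to_accel = 128
can be checked mechanically. The stand-alone program below mirrors the load
pattern of the rewritten buffer_zero_avx2() with plain byte arithmetic (an
unaligned 32-byte head, the aligned while (p <= e) loop over 128-byte blocks,
and the unaligned 128-byte tail) and asserts that every byte of the buffer is
covered for all len >= 128 and every 32-byte misalignment. The base address
and the length range are made up for the simulation; nothing is dereferenced.

/*
 * Illustrative coverage check, not part of the patch: replays the load
 * pattern of the new buffer_zero_avx2() as index arithmetic and verifies
 * that the head, the aligned loop and the 128-byte tail together touch
 * every byte whenever len >= 128.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void mark(bool *seen, size_t len, uintptr_t buf, uintptr_t lo, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        size_t idx = lo + i - buf;
        if (idx < len) {            /* ignore anything outside the buffer */
            seen[idx] = true;
        }
    }
}

static bool covers_all(uintptr_t buf, size_t len)
{
    bool seen[2048] = { false };
    assert(len <= sizeof(seen));

    /* Head: t = _mm256_loadu_si256(buf), i.e. bytes [0, 32). */
    mark(seen, len, buf, buf, 32);

    /* Aligned loop: p starts at (buf + 5*32) & -32 and each iteration
     * reads p[-4]..p[-1], i.e. the 128 bytes just below p. */
    uintptr_t p = (buf + 5 * 32) & -(uintptr_t)32;
    uintptr_t e = (buf + len) & -(uintptr_t)32;
    while (p <= e) {
        mark(seen, len, buf, p - 4 * 32, 4 * 32);
        p += 4 * 32;
    }

    /* Tail: four unaligned loads covering the last 128 bytes. */
    mark(seen, len, buf, buf + len - 4 * 32, 4 * 32);

    for (size_t i = 0; i < len; i++) {
        if (!seen[i]) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    for (uintptr_t align = 0; align < 32; align++) {
        for (size_t len = 128; len <= 1024; len++) {
            assert(covers_all(0x10000 + align, len));
        }
    }
    printf("all bytes covered for len >= 128\n");
    return 0;
}

For len < 128 the tail load at buf + len - 4 * 32 would start before the
buffer, which is exactly the case the removed goto last2 path used to handle
and why the threshold has to be 128 once that path is gone.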