Message ID | 1608325864-4033-2-git-send-email-megha.dey@intel.com
---|---
State | RFC
Delegated to | Herbert Xu
Series | Introduce AVX512 optimized crypto algorithms
On Fri, 18 Dec 2020 at 22:07, Megha Dey <megha.dey@intel.com> wrote:
>
> This is a preparatory patch to introduce the optimized crypto algorithms
> using AVX512 instructions which would require VAES and VPCLMULQDQ support.
>
> Check for VAES and VPCLMULQDQ assembler support using AVX512 registers.
>
> Cc: x86@kernel.org
> Signed-off-by: Megha Dey <megha.dey@intel.com>
> ---
>  arch/x86/Kconfig.assembler | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
> index 26b8c08..9ea0bc8 100644
> --- a/arch/x86/Kconfig.assembler
> +++ b/arch/x86/Kconfig.assembler
> @@ -1,6 +1,16 @@
>  # SPDX-License-Identifier: GPL-2.0
>  # Copyright (C) 2020 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
>
> +config AS_VAES_AVX512
> +	def_bool $(as-instr,vaesenc %zmm0$(comma)%zmm1$(comma)%zmm1) && 64BIT

Is the '&& 64BIT' necessary here, but not below?

In any case, better to use a separate 'depends on' line, for legibility.

> +	help
> +	  Supported by binutils >= 2.30 and LLVM integrated assembler
> +
> +config AS_VPCLMULQDQ
> +	def_bool $(as-instr,vpclmulqdq \$0$(comma)%zmm2$(comma)%zmm6$(comma)%zmm4)
> +	help
> +	  Supported by binutils >= 2.30 and LLVM integrated assembler
> +
>  config AS_AVX512
> 	def_bool $(as-instr,vpmovm2b %k1$(comma)%zmm5)
> 	help
> --
> 2.7.4
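For reference, a minimal sketch of the layout Ard is suggesting, not part of the posted patch: the assembler probe stays in def_bool and the architecture requirement moves to its own 'depends on' line.

config AS_VAES_AVX512
	# Hypothetical restructuring per the review comment above; the
	# probe itself is unchanged, only the 64-bit requirement moves
	# out of the def_bool expression onto a separate line.
	def_bool $(as-instr,vaesenc %zmm0$(comma)%zmm1$(comma)%zmm1)
	depends on 64BIT
	help
	  Supported by binutils >= 2.30 and LLVM integrated assembler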
Hi Ard,

On 1/16/2021 8:54 AM, Ard Biesheuvel wrote:
> On Fri, 18 Dec 2020 at 22:07, Megha Dey <megha.dey@intel.com> wrote:
>> This is a preparatory patch to introduce the optimized crypto algorithms
>> using AVX512 instructions which would require VAES and VPCLMULQDQ support.
>>
>> Check for VAES and VPCLMULQDQ assembler support using AVX512 registers.
>>
>> Cc: x86@kernel.org
>> Signed-off-by: Megha Dey <megha.dey@intel.com>
>> ---
>>  arch/x86/Kconfig.assembler | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
>> index 26b8c08..9ea0bc8 100644
>> --- a/arch/x86/Kconfig.assembler
>> +++ b/arch/x86/Kconfig.assembler
>> @@ -1,6 +1,16 @@
>>  # SPDX-License-Identifier: GPL-2.0
>>  # Copyright (C) 2020 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
>>
>> +config AS_VAES_AVX512
>> +	def_bool $(as-instr,vaesenc %zmm0$(comma)%zmm1$(comma)%zmm1) && 64BIT
> Is the '&& 64BIT' necessary here, but not below?
>
> In any case, better to use a separate 'depends on' line, for legibility

Yeah, I think the '&& 64BIT' is not required. I will remove it in the next version.

-Megha

>> +	help
>> +	  Supported by binutils >= 2.30 and LLVM integrated assembler
>> +
>> +config AS_VPCLMULQDQ
>> +	def_bool $(as-instr,vpclmulqdq \$0$(comma)%zmm2$(comma)%zmm6$(comma)%zmm4)
>> +	help
>> +	  Supported by binutils >= 2.30 and LLVM integrated assembler
>> +
>>  config AS_AVX512
>> 	def_bool $(as-instr,vpmovm2b %k1$(comma)%zmm5)
>> 	help
>> --
>> 2.7.4
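Based on the reply above, the next revision would presumably probe both instructions the same way, with the '&& 64BIT' test simply dropped. A sketch of what that might look like (not the actual v2 patch):

config AS_VAES_AVX512
	def_bool $(as-instr,vaesenc %zmm0$(comma)%zmm1$(comma)%zmm1)
	help
	  Supported by binutils >= 2.30 and LLVM integrated assembler

config AS_VPCLMULQDQ
	def_bool $(as-instr,vpclmulqdq \$0$(comma)%zmm2$(comma)%zmm6$(comma)%zmm4)
	help
	  Supported by binutils >= 2.30 and LLVM integrated assembler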
diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler
index 26b8c08..9ea0bc8 100644
--- a/arch/x86/Kconfig.assembler
+++ b/arch/x86/Kconfig.assembler
@@ -1,6 +1,16 @@
 # SPDX-License-Identifier: GPL-2.0
 # Copyright (C) 2020 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 
+config AS_VAES_AVX512
+	def_bool $(as-instr,vaesenc %zmm0$(comma)%zmm1$(comma)%zmm1) && 64BIT
+	help
+	  Supported by binutils >= 2.30 and LLVM integrated assembler
+
+config AS_VPCLMULQDQ
+	def_bool $(as-instr,vpclmulqdq \$0$(comma)%zmm2$(comma)%zmm6$(comma)%zmm4)
+	help
+	  Supported by binutils >= 2.30 and LLVM integrated assembler
+
 config AS_AVX512
 	def_bool $(as-instr,vpmovm2b %k1$(comma)%zmm5)
 	help
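A note on the probe syntax, since the instruction strings look unusual: inside Kconfig's $(as-instr,...) macro a bare comma would be taken as an argument separator, so literal commas are written as $(comma), and the '$' of the immediate operand is escaped as '\$' so it survives unexpanded until it reaches the assembler. An annotated restatement of one entry from the patch (the comments are added here, not present in the original):

config AS_VPCLMULQDQ
	# Probes, in effect: vpclmulqdq $0, %zmm2, %zmm6, %zmm4
	# $(comma) stands for a literal ',' (a bare comma would split the
	# macro arguments) and '\$' keeps the immediate's '$' from being
	# expanded before the string reaches the assembler.
	def_bool $(as-instr,vpclmulqdq \$0$(comma)%zmm2$(comma)%zmm6$(comma)%zmm4)
	help
	  Supported by binutils >= 2.30 and LLVM integrated assembler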
This is a preparatory patch to introduce the optimized crypto algorithms
using AVX512 instructions, which require VAES and VPCLMULQDQ support.

Check for VAES and VPCLMULQDQ assembler support using AVX512 registers.

Cc: x86@kernel.org
Signed-off-by: Megha Dey <megha.dey@intel.com>
---
 arch/x86/Kconfig.assembler | 10 ++++++++++
 1 file changed, 10 insertions(+)
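These AS_* symbols only record whether the toolchain can assemble the new instructions; the AVX512 implementations introduced later in the series are expected to gate on them from their own Kconfig entries. A hypothetical consumer, with an illustrative option name not taken from the series:

config CRYPTO_AES_AVX512_EXAMPLE
	# Hypothetical option, shown only to illustrate how AS_VAES_AVX512
	# and AS_VPCLMULQDQ are meant to be consumed as build dependencies.
	tristate "AES cipher algorithms (VAES/VPCLMULQDQ, AVX512)"
	depends on X86 && 64BIT
	depends on AS_VAES_AVX512 && AS_VPCLMULQDQ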