[RFC,1/8] xen/arm: enable SVE extension for Xen

Message ID 20230111143826.3224-2-luca.fancellu@arm.com (mailing list archive)
State Superseded
Series SVE feature for arm guests

Commit Message

Luca Fancellu Jan. 11, 2023, 2:38 p.m. UTC
Enable Xen to handle the SVE extension: add code to the cpufeature module
to handle the ZCR SVE register, and disable trapping of the SVE feature on
system boot; trapping will be restored later on vcpu creation and scheduling.
While there, correct the coding style of the comment on coprocessor
trapping.

Change the Kconfig entry to make the ARM64_SVE symbol selectable; by
default it will not be selected.

Create the sve module and sve_asm.S, which contains assembly routines for
the SVE feature. This code is inspired by Linux and uses instruction
encodings so that it stays compatible with compilers that do not support
SVE.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/Kconfig                     |  3 +-
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++--
 xen/arch/arm/arm64/sve.c                 | 38 +++++++++++++++++++
 xen/arch/arm/arm64/sve_asm.S             | 48 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    |  4 ++
 xen/arch/arm/include/asm/arm64/sve.h     | 43 +++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 ++-
 xen/arch/arm/traps.c                     | 34 ++++++++++++-----
 14 files changed, 188 insertions(+), 19 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/arm64/sve_asm.S
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

Comments

Julien Grall Jan. 11, 2023, 5:16 p.m. UTC | #1
Hi Luca,

As this is an RFC, I will be mostly making general comments.

On 11/01/2023 14:38, Luca Fancellu wrote:
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 99577adb6c69..8ea3843ea8e8 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>       /* VGIC */
>       gic_restore_state(n);
>   
> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);

Shouldn't this need an isb() afterwards to ensure that any previously
trapped registers will be accessible?
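
For illustration, a minimal sketch of that ordering against this patch's
ctxt_switch_to(), assuming Xen's isb() barrier helper:

    /* VGIC */
    gic_restore_state(n);

    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
    /* Make the new trap configuration take effect before any FP/SVE access */
    isb();

    /* VFP */
    vfp_restore_state(n);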

[...]

> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>   
>   void init_traps(void)
>   {
> +    register_t cptr_bits = get_default_cptr_flags();
>       /*
>        * Setup Hyp vector base. Note they might get updated with the
>        * branch predictor hardening.
> @@ -135,17 +151,15 @@ void init_traps(void)
>       /* Trap CP15 c15 used for implementation defined registers */
>       WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>   
> -    /* Trap all coprocessor registers (0-13) except cp10 and
> -     * cp11 for VFP.
> -     *
> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> -     *
> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> -     * RES1, i.e. they would trap whether we did this write or not.
> +#ifdef CONFIG_ARM64_SVE
> +    /*
> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
> +     * trapping again or not will be handled on vcpu creation/scheduling later
>        */

Instead of enabling this by default at boot, can we try to enable/disable it
only when it is strictly needed?

> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> -                 HCPTR_TTA | HCPTR_TAM,
> -                 CPTR_EL2);
> +    cptr_bits &= ~HCPTR_CP(8);
> +#endif
> +
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>   
>       /*
>        * Configure HCR_EL2 with the bare minimum to run Xen until a guest

Cheers,
Luca Fancellu Jan. 12, 2023, 10:46 a.m. UTC | #2
> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> As this is an RFC, I will be mostly making general comments.

Hi Julien,

Thank you.

> 
> On 11/01/2023 14:38, Luca Fancellu wrote:
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 99577adb6c69..8ea3843ea8e8 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>      /* VGIC */
>>      gic_restore_state(n);
>>  +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
> 
> Shouldn't this need an isb() afterwards to ensure that any previously trapped registers will be accessible?

Yes, you are right. Would it be ok for you if I moved this before gic_restore_state(), since it has
an isb() inside? This is to limit the isb() usage. I could also add a comment so it is not forgotten.

Otherwise I will add the barrier.


> 
> [...]
> 
>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>    void init_traps(void)
>>  {
>> +    register_t cptr_bits = get_default_cptr_flags();
>>      /*
>>       * Setup Hyp vector base. Note they might get updated with the
>>       * branch predictor hardening.
>> @@ -135,17 +151,15 @@ void init_traps(void)
>>      /* Trap CP15 c15 used for implementation defined registers */
>>      WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>  -    /* Trap all coprocessor registers (0-13) except cp10 and
>> -     * cp11 for VFP.
>> -     *
>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>> -     *
>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>> -     * RES1, i.e. they would trap whether we did this write or not.
>> +#ifdef CONFIG_ARM64_SVE
>> +    /*
>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>       */
> 
> Instead of enabling this by default at boot, can we try to enable/disable it only when it is strictly needed?

Yes, we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap
again when finished. Would this approach be ok for you?
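
Something along these lines (a rough sketch reusing this patch's names; saving
and restoring the previous CPTR_EL2 value around the access is an assumption):

    register_t compute_max_zcr(void)
    {
        register_t cptr = READ_SYSREG(CPTR_EL2);
        register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
        unsigned int hw_vl;

        /* Un-trap SVE (clear HCPTR_CP(8), i.e. CPTR_EL2.TZ) to access ZCR_EL2 */
        WRITE_SYSREG(cptr & ~HCPTR_CP(8), CPTR_EL2);
        isb();

        /* Ask for the maximum VL; the hardware clamps it to what it supports */
        WRITE_SYSREG(zcr, ZCR_EL2);
        isb();

        /* sve_get_hw_vl() returns bytes, vl_to_zcr() takes bits */
        hw_vl = sve_get_hw_vl() * 8U;

        /* Trap SVE again */
        WRITE_SYSREG(cptr, CPTR_EL2);
        isb();

        return vl_to_zcr(hw_vl);
    }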

> 
>> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
>> -                 HCPTR_TTA | HCPTR_TAM,
>> -                 CPTR_EL2);
>> +    cptr_bits &= ~HCPTR_CP(8);
>> +#endif
>> +
>> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>>        /*
>>       * Configure HCR_EL2 with the bare minimum to run Xen until a guest
> 
> Cheers,
> -- 
> Julien Grall
Julien Grall Jan. 13, 2023, 8:53 a.m. UTC | #3
Hi Luca,

On 12/01/2023 10:46, Luca Fancellu wrote:
> 
> 
>> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> As this is an RFC, I will be mostly making general comments.
> 
> Hi Julien,
> 
> Thank you.
> 
>>
>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 99577adb6c69..8ea3843ea8e8 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>>       /* VGIC */
>>>       gic_restore_state(n);
>>>   +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>
>> Shouldn't this need an isb() afterwards to ensure that any previously trapped registers will be accessible?
> 
> Yes, you are right. Would it be ok for you if I moved this before gic_restore_state(), since it has
> an isb() inside? This is to limit the isb() usage. I could also add a comment so it is not forgotten.

I would prefer that we don't rely on gic_restore_state() having an
isb(), because this could change in the future (although unlikely).

Looking at the context switch code, I think we can move the call that
restores the floating point registers towards the end of the helper and use
one of the existing isb()s for our purpose.
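
Roughly like this (a sketch only; exactly which existing isb() in
ctxt_switch_to() we would rely on is left open):

    static void ctxt_switch_to(struct vcpu *n)
    {
        ...
        WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
        ...
        /* remaining system register restores */
        isb();

        /*
         * VFP state is restored only after the isb(), so the CPTR_EL2
         * write above is guaranteed to be in effect by now.
         */
        vfp_restore_state(n);
    }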


>>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>>     void init_traps(void)
>>>   {
>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>       /*
>>>        * Setup Hyp vector base. Note they might get updated with the
>>>        * branch predictor hardening.
>>> @@ -135,17 +151,15 @@ void init_traps(void)
>>>       /* Trap CP15 c15 used for implementation defined registers */
>>>       WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>>   -    /* Trap all coprocessor registers (0-13) except cp10 and
>>> -     * cp11 for VFP.
>>> -     *
>>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>>> -     *
>>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>>> -     * RES1, i.e. they would trap whether we did this write or not.
>>> +#ifdef CONFIG_ARM64_SVE
>>> +    /*
>>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>>        */
>>
>> Instead of enabling this by default at boot, can we try to enable/disable it only when it is strictly needed?
> 
> Yes, we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap
> again when finished. Would this approach be ok for you?

Yes.

Cheers,
Luca Fancellu Jan. 13, 2023, 12:53 p.m. UTC | #4
> On 13 Jan 2023, at 08:53, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 12/01/2023 10:46, Luca Fancellu wrote:
>>> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> As this is an RFC, I will be mostly making general comments.
>> Hi Julien,
>> Thank you.
>>> 
>>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index 99577adb6c69..8ea3843ea8e8 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>      /* VGIC */
>>>>      gic_restore_state(n);
>>>>  +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>> 
>>> Shouldn't this need an isb() afterwards to ensure that any previously trapped registers will be accessible?
>> Yes, you are right. Would it be ok for you if I moved this before gic_restore_state(), since it has
>> an isb() inside? This is to limit the isb() usage. I could also add a comment so it is not forgotten.
> 
> I would prefer that we don't rely on gic_restore_state() having an isb(), because this could change in the future (although unlikely).
> 
> Looking at the context switch code, I think we can move the call that restores the floating point registers towards the end of the helper and use one of the existing isb()s for our purpose.

Sounds good to me

> 
> 
>>>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>>>    void init_traps(void)
>>>>  {
>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>>      /*
>>>>       * Setup Hyp vector base. Note they might get updated with the
>>>>       * branch predictor hardening.
>>>> @@ -135,17 +151,15 @@ void init_traps(void)
>>>>      /* Trap CP15 c15 used for implementation defined registers */
>>>>      WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>>>  -    /* Trap all coprocessor registers (0-13) except cp10 and
>>>> -     * cp11 for VFP.
>>>> -     *
>>>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>>>> -     *
>>>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>>>> -     * RES1, i.e. they would trap whether we did this write or not.
>>>> +#ifdef CONFIG_ARM64_SVE
>>>> +    /*
>>>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>>>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>>>       */
>>> 
>>> Instead of enabling this by default at boot, can we try to enable/disable it only when it is strictly needed?
>> Yes, we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap
>> again when finished. Would this approach be ok for you?
> 
> Yes.
> 
> Cheers,
> 
> -- 
> Julien Grall

Patch

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..2a5151f3c718 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,10 @@  config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalable Vector Extension support" if EXPERT
 	depends on ARM_64
 	help
	  Scalable Vector Extension support.
-	  This feature is not supported in Xen.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..1d59c3b0ec89 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -12,6 +12,7 @@  obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve_asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@  static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@  void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..326389278292
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,38 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+register_t compute_max_zcr(void)
+{
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /*
+     * Set the maximum SVE vector length, doing that we will know the VL
+     * supported by the platform, calling sve_get_hw_vl()
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read the maximum VL, which could be lower than what we imposed before,
+     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    return vl_to_zcr(hw_vl);
+}
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+register_t vl_to_zcr(uint16_t vl)
+{
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
diff --git a/xen/arch/arm/arm64/sve_asm.S b/xen/arch/arm/arm64/sve_asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve_asm.S
@@ -0,0 +1,48 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..83b84368f6d5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@ 
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@  void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@  static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE by default to the guests */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c69..8ea3843ea8e8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -181,6 +181,8 @@  static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
+
     /* VFP */
     vfp_restore_state(n);
 
@@ -548,6 +550,8 @@  int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..bd56e2f24230
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,43 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS (2048U)
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL (128U)
+
+#ifdef CONFIG_ARM64_SVE
+
+register_t compute_max_zcr(void);
+register_t vl_to_zcr(uint16_t vl);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline register_t compute_max_zcr(void)
+{
+    return 0;
+}
+
+static inline register_t vl_to_zcr(uint16_t vl)
+{
+    return 0;
+}
+
+#endif
+
+#endif /* _ARM_ARM64_SVE_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@ 
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..6d703e051906 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@ 
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       (0)
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@  struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 0e310601e846..42eb5df320a7 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@  struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 1dd81d7d528f..0e38926b94db 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -583,6 +583,8 @@  void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90e3..5459cc4f5e62 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@  static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd68..45163fd3afb0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@  register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -122,6 +137,7 @@  __initcall(update_serrors_cpu_caps);
 
 void init_traps(void)
 {
+    register_t cptr_bits = get_default_cptr_flags();
     /*
      * Setup Hyp vector base. Note they might get updated with the
      * branch predictor hardening.
@@ -135,17 +151,15 @@  void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
+#ifdef CONFIG_ARM64_SVE
+    /*
+     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
+     * trapping again or not will be handled on vcpu creation/scheduling later
      */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    cptr_bits &= ~HCPTR_CP(8);
+#endif
+
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest