diff mbox series

[v10,5/5] KVM: arm64: Refactor writes for PMUVer/CSV2/CSV3

Message ID 20230522221835.957419-6-jingzhangos@google.com (mailing list archive)
State New, archived
Series Support writable CPU ID registers from userspace

Commit Message

Jing Zhang May 22, 2023, 10:18 p.m. UTC
Refactor writes for ID_AA64PFR0_EL1.[CSV2|CSV3],
ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on utilities
specific to ID registers.

Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
 arch/arm64/include/asm/cpufeature.h |   1 +
 arch/arm64/kernel/cpufeature.c      |   2 +-
 arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
 3 files changed, 243 insertions(+), 125 deletions(-)
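
The refactor hinges on two new pieces of per-descriptor state: a reset()
callback that returns the KVM-sanitised limit for the register, and a "val"
mask of the fields userspace may change; the generic set_id_reg() path then
validates writes against that limit via arm64_check_features(). As an
illustration, the ID_AA64PFR0_EL1 entry from the sys_regs.c hunk below:

	{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
	  .access = access_id_reg,
	  .get_user = get_id_reg,
	  .set_user = set_id_aa64pfr0_el1,
	  .reset = read_sanitised_id_aa64pfr0_el1,
	  .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },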

Comments

Marc Zyngier May 28, 2023, 11:04 a.m. UTC | #1
On Mon, 22 May 2023 23:18:35 +0100,
Jing Zhang <jingzhangos@google.com> wrote:
> 
> Refactor writes for ID_AA64PFR0_EL1.[CSV2|CSV3],
> ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on utilities
> specific to ID registers.
> 
> Signed-off-by: Jing Zhang <jingzhangos@google.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |   1 +
>  arch/arm64/kernel/cpufeature.c      |   2 +-
>  arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
>  3 files changed, 243 insertions(+), 125 deletions(-)

Reading the result after applying this series, I feel like a stuck
record. This final series still contains gems like this:

static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
			       const struct sys_reg_desc *rd,
			       u64 val)
{
	u8 csv2, csv3;

	/*
	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
	 * it doesn't promise more than what is actually provided (the
	 * guest could otherwise be covered in ectoplasmic residue).
	 */
	csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
	if (csv2 > 1 ||
	    (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
		return -EINVAL;

	/* Same thing for CSV3 */
	csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
	if (csv3 > 1 ||
	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
		return -EINVAL;

	return set_id_reg(vcpu, rd, val);
}

Why do we have this? I've asked the question at least 3 times in the
previous versions, and I still see the same code.

If we have sane limits, the call to arm64_check_features() in
set_id_reg() will catch the illegal write. So why do we have this at
all? The whole point of the exercise was to unify the handling. But
you're actually making it worse.
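
For reference, the generic handler added by this very series already boils
down to this (a condensed sketch of the set_id_reg()/arm64_check_features()
hunks further down, not a new proposal):

static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
		      u64 val)
{
	int ret;

	mutex_lock(&vcpu->kvm->arch.config_lock);
	if (kvm_vm_has_ran_once(vcpu->kvm)) {
		/* idregs are frozen once the VM has run */
		ret = (val != read_id_reg(vcpu, rd)) ? -EBUSY : 0;
	} else {
		/*
		 * Each writable field of @val is checked against the limit
		 * returned by rd->reset(); anything above that limit, or any
		 * change to a non-writable field, fails.
		 */
		ret = arm64_check_features(vcpu, rd, val);
		if (!ret)
			IDREG(vcpu->kvm, reg_to_encoding(rd)) = val;
	}
	mutex_unlock(&vcpu->kvm->arch.config_lock);

	return ret;
}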

So what's the catch?

	M.
Jing Zhang May 30, 2023, 9:18 p.m. UTC | #2
Hi Marc,

On Sun, May 28, 2023 at 4:05 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Mon, 22 May 2023 23:18:35 +0100,
> Jing Zhang <jingzhangos@google.com> wrote:
> >
> > Refactor writes for ID_AA64PFR0_EL1.[CSV2|CSV3],
> > ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on utilities
> > specific to ID registers.
> >
> > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > ---
> >  arch/arm64/include/asm/cpufeature.h |   1 +
> >  arch/arm64/kernel/cpufeature.c      |   2 +-
> >  arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
> >  3 files changed, 243 insertions(+), 125 deletions(-)
>
> Reading the result after applying this series, I feel like a stuck
> record. This final series still contains gems like this:
>
> static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
>                                const struct sys_reg_desc *rd,
>                                u64 val)
> {
>         u8 csv2, csv3;
>
>         /*
>          * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
>          * it doesn't promise more than what is actually provided (the
>          * guest could otherwise be covered in ectoplasmic residue).
>          */
>         csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
>         if (csv2 > 1 ||
>             (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
>                 return -EINVAL;
>
>         /* Same thing for CSV3 */
>         csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
>         if (csv3 > 1 ||
>             (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
>                 return -EINVAL;
>
>         return set_id_reg(vcpu, rd, val);
> }
>
> Why do we have this? I've asked the question at least 3 times in the
> previous versions, and I still see the same code.
>
> If we have sane limits, the call to arm64_check_features() in
> set_id_reg() will catch the illegal write. So why do we have this at
> all? The whole point of the exercise was to unify the handling. But
> you're actually making it worse.
>
> So what's the catch?
Sorry, I am only aware of one discussion of this code in v8. The
reason I still keep the check here is that the arm64_check_features()
can not catch all illegal writes as this code does.
For example, for CSV2, one concern is:
When arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED, this code
only allows guest CSV2 to be set to 0, any non-zero value would lead
to -EINVAL. If we remove the check here, the guest CSV2 can be set to
any value lower or equal to host CSV2.
Of course, we can set the sane limit of CSV2 to 0 when
arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED in
read_sanitised_id_aa64pfr0_el1(). Then we can remove all the checks
here and no specific set_id function for AA64PFR0_EL1 is needed.
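
Something like the following sketch (only to illustrate the idea, not what
this v10 patch does; other fields such as AMU omitted): the sanitised view
would only advertise CSV2/CSV3 when the host is unaffected, so the limit
itself becomes 0 otherwise and the generic arm64_check_features() path
rejects any non-zero write:

static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
					  const struct sys_reg_desc *rd)
{
	u64 val = read_sanitised_ftr_reg(reg_to_encoding(rd));

	/* Only advertise CSV2/CSV3 when the host is known unaffected. */
	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);

	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);

	return val;
}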
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
Thanks,
Jing
Marc Zyngier May 31, 2023, 7:31 a.m. UTC | #3
On Tue, 30 May 2023 22:18:04 +0100,
Jing Zhang <jingzhangos@google.com> wrote:
> 
> Hi Marc,
> 
> On Sun, May 28, 2023 at 4:05 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Mon, 22 May 2023 23:18:35 +0100,
> > Jing Zhang <jingzhangos@google.com> wrote:
> > >
> > > Refactor writes for ID_AA64PFR0_EL1.[CSV2|CSV3],
> > > ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on utilities
> > > specific to ID registers.
> > >
> > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > ---
> > >  arch/arm64/include/asm/cpufeature.h |   1 +
> > >  arch/arm64/kernel/cpufeature.c      |   2 +-
> > >  arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
> > >  3 files changed, 243 insertions(+), 125 deletions(-)
> >
> > Reading the result after applying this series, I feel like a stuck
> > record. This final series still contains gems like this:
> >
> > static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> >                                const struct sys_reg_desc *rd,
> >                                u64 val)
> > {
> >         u8 csv2, csv3;
> >
> >         /*
> >          * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
> >          * it doesn't promise more than what is actually provided (the
> >          * guest could otherwise be covered in ectoplasmic residue).
> >          */
> >         csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
> >         if (csv2 > 1 ||
> >             (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
> >                 return -EINVAL;
> >
> >         /* Same thing for CSV3 */
> >         csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
> >         if (csv3 > 1 ||
> >             (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
> >                 return -EINVAL;
> >
> >         return set_id_reg(vcpu, rd, val);
> > }
> >
> > Why do we have this? I've asked the question at least 3 times in the
> > previous versions, and I still see the same code.
> >
> > If we have sane limits, the call to arm64_check_features() in
> > set_id_reg() will catch the illegal write. So why do we have this at
> > all? The whole point of the exercise was to unify the handling. But
> > you're actually making it worse.
> >
> > So what's the catch?
> Sorry, I am only aware of one discussion of this code in v8. The
> reason I still keep the check here is that the arm64_check_features()
> can not catch all illegal writes as this code does.
> For example, for CSV2, one concern is:
> When arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED, this code
> only allows guest CSV2 to be set to 0, any non-zero value would lead
> to -EINVAL. If we remove the check here, the guest CSV2 can be set to
> any value lower or equal to host CSV2.

Sorry, this doesn't make sense. Lower is always fine. If you meant
'higher', then I agree that it would be bad. But that doesn't make
keeping this code the right outcome.

> Of course, we can set the sane limit of CSV2 to 0 when
> arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED in
> read_sanitised_id_aa64pfr0_el1(). Then we can remove all the checks
> here and no specific set_id function for AA64PFR0_EL1 is needed.

This is what I have been asking for all along: the "sanitised" view of
the register *must* return the absolute limit for the fields that are
flagged as writable by "mask".

If we need extra code, then something is really wrong. The core
feature code manages that without any special casing, and we should be
able to reach the same level.
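
For PMUVer, for instance, the sanitised view in this series already encodes
the ceiling (from read_sanitised_id_aa64dfr0_el1() below); CSV2/CSV3 should
get the same treatment:

	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
			  kvm_arm_pmu_get_pmuver_limit());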

Thanks,

	M.
Jing Zhang May 31, 2023, 5:29 p.m. UTC | #4
Hi Marc,

On Wed, May 31, 2023 at 12:31 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Tue, 30 May 2023 22:18:04 +0100,
> Jing Zhang <jingzhangos@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Sun, May 28, 2023 at 4:05 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > On Mon, 22 May 2023 23:18:35 +0100,
> > > Jing Zhang <jingzhangos@google.com> wrote:
> > > >
> > > > Refactor writes for ID_AA64PFR0_EL1.[CSV2|CSV3],
> > > > ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on utilities
> > > > specific to ID registers.
> > > >
> > > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > > ---
> > > >  arch/arm64/include/asm/cpufeature.h |   1 +
> > > >  arch/arm64/kernel/cpufeature.c      |   2 +-
> > > >  arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
> > > >  3 files changed, 243 insertions(+), 125 deletions(-)
> > >
> > > Reading the result after applying this series, I feel like a stuck
> > > record. This final series still contains gems like this:
> > >
> > > static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> > >                                const struct sys_reg_desc *rd,
> > >                                u64 val)
> > > {
> > >         u8 csv2, csv3;
> > >
> > >         /*
> > >          * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
> > >          * it doesn't promise more than what is actually provided (the
> > >          * guest could otherwise be covered in ectoplasmic residue).
> > >          */
> > >         csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
> > >         if (csv2 > 1 ||
> > >             (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
> > >                 return -EINVAL;
> > >
> > >         /* Same thing for CSV3 */
> > >         csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
> > >         if (csv3 > 1 ||
> > >             (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
> > >                 return -EINVAL;
> > >
> > >         return set_id_reg(vcpu, rd, val);
> > > }
> > >
> > > Why do we have this? I've asked the question at least 3 times in the
> > > previous versions, and I still see the same code.
> > >
> > > If we have sane limits, the call to arm64_check_features() in
> > > set_id_reg() will catch the illegal write. So why do we have this at
> > > all? The whole point of the exercise was to unify the handling. But
> > > you're actually making it worse.
> > >
> > > So what's the catch?
> > Sorry, I am only aware of one discussion of this code in v8. The
> > reason I still keep the check here is that the arm64_check_features()
> > can not catch all illegal writes as this code does.
> > For example, for CSV2, one concern is:
> > When arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED, this code
> > only allows guest CSV2 to be set to 0, any non-zero value would lead
> > to -EINVAL. If we remove the check here, the guest CSV2 can be set to
> > any value lower or equal to host CSV2.
>
> Sorry, this doesn't make sense. Lower is always fine. If you meant
> 'higher', then I agree that it would be bad. But that doesn't make
> keeping this code the right outcome.
Got it. Then it would be good to remove the check here. Will do that.
>
> > Of course, we can set the sane limit of CSV2 to 0 when
> > arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED in
> > read_sanitised_id_aa64pfr0_el1(). Then we can remove all the checks
> > here and no specific set_id function for AA64PFR0_EL1 is needed.
>
> This is what I have been asking for all along: the "sanitised" view of
> the register *must* return the absolute limit for the fields that are
> flagged as writable by "mask".
>
> If we need extra code, then something is really wrong. The core
> feature code manages that without any special casing, and we should be
> able to reach the same level.
Understood.
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
Thanks,
Jing

Patch

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6bf013fb110d..dc769c2eb7a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -915,6 +915,7 @@  static inline unsigned int get_vmid_bits(u64 mmfr1)
 	return 8;
 }
 
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur);
 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);
 
 extern struct arm64_ftr_override id_aa64mmfr1_override;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7d7128c65161..3317a7b6deac 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -798,7 +798,7 @@  static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
 	return reg;
 }
 
-static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
 				s64 cur)
 {
 	s64 ret = 0;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 72255dea8027..b3eacfc592eb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@ 
  * 64bit interface.
  */
 
+static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val);
 static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
 
@@ -1194,6 +1195,86 @@  static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
+				    s64 new, s64 cur)
+{
+	struct arm64_ftr_bits kvm_ftr = *ftrp;
+
+	/* Some features have different safe value type in KVM than host features */
+	switch (id) {
+	case SYS_ID_AA64DFR0_EL1:
+		if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT)
+			kvm_ftr.type = FTR_LOWER_SAFE;
+		break;
+	case SYS_ID_DFR0_EL1:
+		if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT)
+			kvm_ftr.type = FTR_LOWER_SAFE;
+		break;
+	}
+
+	return arm64_ftr_safe_value(&kvm_ftr, new, cur);
+}
+
+/**
+ * arm64_check_features() - Check if a feature register value constitutes
+ * a subset of features indicated by the idreg's KVM sanitised limit.
+ *
+ * This function will check if each feature field of @val is the "safe" value
+ * against the idreg's KVM sanitised limit returned from the reset() callback.
+ * If a field value in @val is the same as the one in limit, it is always
+ * considered the safe value regardless. For register fields that are not
+ * writable, only the value in limit is considered the safe value.
+ *
+ * Return: 0 if all the fields are safe. Otherwise, return negative errno.
+ */
+static int arm64_check_features(struct kvm_vcpu *vcpu,
+				const struct sys_reg_desc *rd,
+				u64 val)
+{
+	const struct arm64_ftr_reg *ftr_reg;
+	const struct arm64_ftr_bits *ftrp = NULL;
+	u32 id = reg_to_encoding(rd);
+	u64 writable_mask = rd->val;
+	u64 limit = 0;
+	u64 mask = 0;
+
+	/* For hidden and unallocated idregs without reset, only val = 0 is allowed. */
+	if (rd->reset) {
+		limit = rd->reset(vcpu, rd);
+		ftr_reg = get_arm64_ftr_reg(id);
+		if (!ftr_reg)
+			return -EINVAL;
+		ftrp = ftr_reg->ftr_bits;
+	}
+
+	for (; ftrp && ftrp->width; ftrp++) {
+		s64 f_val, f_lim, safe_val;
+		u64 ftr_mask;
+
+		ftr_mask = arm64_ftr_mask(ftrp);
+		if ((ftr_mask & writable_mask) != ftr_mask)
+			continue;
+
+		f_val = arm64_ftr_value(ftrp, val);
+		f_lim = arm64_ftr_value(ftrp, limit);
+		mask |= ftr_mask;
+
+		if (f_val == f_lim)
+			safe_val = f_val;
+		else
+			safe_val = kvm_arm64_ftr_safe_value(id, ftrp, f_val, f_lim);
+
+		if (safe_val != f_val)
+			return -E2BIG;
+	}
+
+	/* For fields that are not writable, values in limit are the safe values. */
+	if ((val & ~mask) != (limit & ~mask))
+		return -E2BIG;
+
+	return 0;
+}
+
 static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_has_pmu(vcpu))
@@ -1244,7 +1325,6 @@  static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
 		if (kvm_vgic_global_state.type == VGIC_V3) {
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
 			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -1271,15 +1351,10 @@  static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
 		break;
 	case SYS_ID_AA64DFR0_EL1:
-		/* Limit debug to ARMv8.0 */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
 		/* Set PMUver to the required version */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
 		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
 				  vcpu_pmuver(vcpu));
-		/* Hide SPE from guests */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
 		break;
 	case SYS_ID_DFR0_EL1:
 		val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
@@ -1378,15 +1453,40 @@  static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
+static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+					  const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/*
+	 * The default is to expose CSV2 == 1 if the HW isn't affected.
+	 * Although this is a per-CPU feature, we make it global because
+	 * asymmetric systems are just a nuisance.
+	 *
+	 * Userspace can override this as long as it doesn't promise
+	 * the impossible.
+	 */
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
+
+	return val;
+}
+
 static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 			       const struct sys_reg_desc *rd,
 			       u64 val)
 {
-	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
-	u64 new_val = val;
 	u8 csv2, csv3;
-	int ret = 0;
 
 	/*
 	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -1404,26 +1504,30 @@  static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
 		return -EINVAL;
 
-	mutex_lock(&arch->config_lock);
-	/* We can only differ with CSV[23], and anything else is an error */
-	val ^= old_val;
-	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
-		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
+	return set_id_reg(vcpu, rd, val);
+}
 
-	/* Only allow userspace to change the idregs before VM running */
-	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
-			ret = -EBUSY;
-	} else {
-		IDREG(vcpu->kvm, reg_to_encoding(rd)) = new_val;
-	}
-out:
-	mutex_unlock(&arch->config_lock);
-	return ret;
+static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+					  const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/* Limit debug to ARMv8.0 */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+			  kvm_arm_pmu_get_pmuver_limit());
+	/* Hide SPE from guests */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
+
+	return val;
 }
 
 static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1431,9 +1535,7 @@  static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 			       u64 val)
 {
 	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
 	u8 pmuver, host_pmuver;
-	u64 new_val = val;
 	bool valid_pmu;
 	int ret = 0;
 
@@ -1456,48 +1558,67 @@  static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	mutex_lock(&arch->config_lock);
-	/* We can only differ with PMUver, and anything else is an error */
-	val ^= old_val;
-	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	/* Only allow userspace to change the idregs before VM running */
 	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
+		if (val != read_id_reg(vcpu, rd))
 			ret = -EBUSY;
-	} else {
-		if (valid_pmu) {
-			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
-			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
-			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
-			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
-
-			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
-			val &= ~ID_DFR0_EL1_PerfMon_MASK;
-			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
-			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
-		} else {
-			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
-				   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
-		}
+		goto out;
 	}
 
+	if (!valid_pmu) {
+		/*
+		 * Ignore the PMUVer field in @val. The PMUVer would be determined
+		 * by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU,
+		 */
+		pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK,
+				   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
+		val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+		val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
+	}
+
+	ret = arm64_check_features(vcpu, rd, val);
+	if (ret)
+		goto out;
+
+	IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+	val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+	val &= ~ID_DFR0_EL1_PerfMon_MASK;
+	val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
+	IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+	if (!valid_pmu)
+		assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+			   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+
 out:
 	mutex_unlock(&arch->config_lock);
 	return ret;
 }
 
+static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
+				      const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), kvm_arm_pmu_get_pmuver_limit());
+
+	return val;
+}
+
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 			   const struct sys_reg_desc *rd,
 			   u64 val)
 {
 	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
 	u8 perfmon, host_perfmon;
-	u64 new_val = val;
 	bool valid_pmu;
 	int ret = 0;
 
@@ -1521,35 +1642,39 @@  static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	mutex_lock(&arch->config_lock);
-	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= old_val;
-	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	/* Only allow userspace to change the idregs before VM running */
 	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
+		if (val != read_id_reg(vcpu, rd))
 			ret = -EBUSY;
-	} else {
-		if (valid_pmu) {
-			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
-			val &= ~ID_DFR0_EL1_PerfMon_MASK;
-			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
-			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
-
-			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
-			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
-			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
-			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
-		} else {
-			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
-				   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
-		}
+		goto out;
 	}
 
+	if (!valid_pmu) {
+		/*
+		 * Ignore the PerfMon field in @val. The PerfMon would be determined
+		 * by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU,
+		 */
+		perfmon = FIELD_GET(ID_DFR0_EL1_PerfMon_MASK,
+				    IDREG(vcpu->kvm, SYS_ID_DFR0_EL1));
+		val &= ~ID_DFR0_EL1_PerfMon_MASK;
+		val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
+	}
+
+	ret = arm64_check_features(vcpu, rd, val);
+	if (ret)
+		goto out;
+
+	IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+	val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+	val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+	val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
+	IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+	if (!valid_pmu)
+		assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+			   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
+
 out:
 	mutex_unlock(&arch->config_lock);
 	return ret;
@@ -1577,11 +1702,23 @@  static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      u64 val)
 {
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd))
-		return -EINVAL;
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+	u32 id = reg_to_encoding(rd);
+	int ret = 0;
 
-	return 0;
+	mutex_lock(&arch->config_lock);
+	/* Only allow userspace to change the idregs before VM running */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		if (val != read_id_reg(vcpu, rd))
+			ret = -EBUSY;
+	} else {
+		ret = arm64_check_features(vcpu, rd, val);
+		if (!ret)
+			IDREG(vcpu->kvm, id) = val;
+	}
+	mutex_unlock(&arch->config_lock);
+
+	return ret;
 }
 
 static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1932,9 +2069,13 @@  static const struct sys_reg_desc sys_reg_descs[] = {
 	/* CRm=1 */
 	AA32_ID_SANITISED(ID_PFR0_EL1),
 	AA32_ID_SANITISED(ID_PFR1_EL1),
-	{ SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
-	  .visibility = aa32_id_visibility, },
+	{ SYS_DESC(SYS_ID_DFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_dfr0_el1,
+	  .visibility = aa32_id_visibility,
+	  .reset = read_sanitised_id_dfr0_el1,
+	  .val = ID_DFR0_EL1_PerfMon_MASK, },
 	ID_HIDDEN(ID_AFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR1_EL1),
@@ -1963,8 +2104,12 @@  static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* AArch64 ID registers */
 	/* CRm=4 */
-	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_aa64pfr0_el1,
+	  .reset = read_sanitised_id_aa64pfr0_el1,
+	  .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },
 	ID_SANITISED(ID_AA64PFR1_EL1),
 	ID_UNALLOCATED(4,2),
 	ID_UNALLOCATED(4,3),
@@ -1974,8 +2119,12 @@  static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_UNALLOCATED(4,7),
 
 	/* CRm=5 */
-	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_aa64dfr0_el1,
+	  .reset = read_sanitised_id_aa64dfr0_el1,
+	  .val = ID_AA64DFR0_EL1_PMUVer_MASK, },
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
@@ -3497,38 +3646,6 @@  void kvm_arm_init_id_regs(struct kvm *kvm)
 		idreg++;
 		id = reg_to_encoding(idreg);
 	}
-
-	/*
-	 * The default is to expose CSV2 == 1 if the HW isn't affected.
-	 * Although this is a per-CPU feature, we make it global because
-	 * asymmetric systems are just a nuisance.
-	 *
-	 * Userspace can override this as long as it doesn't promise
-	 * the impossible.
-	 */
-	val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
-
-	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
-	}
-	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
-	}
-
-	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
-
-	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
-			  kvm_arm_pmu_get_pmuver_limit());
-
-	IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
 }
 
 int __init kvm_sys_reg_table_init(void)