Message ID | 20230817003029.3073210-2-rananta@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: arm64: PMU: Allow userspace to limit the number of PMCs on vCPU |
Hi Raghu,

On Thu, Aug 17, 2023 at 12:30:18AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <reijiw@google.com>
>
> Introduce a new helper function to set the guest's PMU
> (kvm->arch.arm_pmu), and use it when the guest's PMU needs
> to be set. This helper will make it easier for the following
> patches to modify the relevant code.
>
> No functional change intended.
>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
>  1 file changed, 36 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 5606509724787..0ffd1efa90c07 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
>  	return true;
>  }
>
> +static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> +{
> +	lockdep_assert_held(&kvm->arch.config_lock);
> +
> +	if (!arm_pmu) {
> +		/*
> +		 * No PMU set, get the default one.
> +		 *
> +		 * The observant among you will notice that the supported_cpus
> +		 * mask does not get updated for the default PMU even though it
> +		 * is quite possible the selected instance supports only a
> +		 * subset of cores in the system. This is intentional, and
> +		 * upholds the preexisting behavior on heterogeneous systems
> +		 * where vCPUs can be scheduled on any core but the guest
> +		 * counters could stop working.
> +		 */
> +		arm_pmu = kvm_pmu_probe_armpmu();
> +		if (!arm_pmu)
> +			return -ENODEV;
> +	}
> +
> +	kvm->arch.arm_pmu = arm_pmu;
> +
> +	return 0;
> +}
> +

I'm not too big of a fan of adding the 'default' path to this helper.
I'd prefer it if kvm_arm_set_vm_pmu() does all the necessary
initialization for a valid pmu instance. You then avoid introducing
unexpected error handling where it didn't exist before.

static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
{
	lockdep_assert_held(&kvm->arch.config_lock);

	kvm->arch.arm_pmu = arm_pmu;
}

/*
 * Blurb about default PMUs I'm too lazy to copy/paste
 */
static int kvm_arm_set_default_pmu(struct kvm *kvm)
{
	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();

	if (!arm_pmu)
		return -ENODEV;

	kvm_arm_set_pmu(kvm, arm_pmu);
	return 0;
}
On Fri, Sep 15, 2023 at 12:22 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghu,
>
> On Thu, Aug 17, 2023 at 12:30:18AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <reijiw@google.com>
> >
> > Introduce a new helper function to set the guest's PMU
> > (kvm->arch.arm_pmu), and use it when the guest's PMU needs
> > to be set. This helper will make it easier for the following
> > patches to modify the relevant code.
> >
> > No functional change intended.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
> >  1 file changed, 36 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index 5606509724787..0ffd1efa90c07 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
> >  	return true;
> >  }
> >
> > +static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> > +{
> > +	lockdep_assert_held(&kvm->arch.config_lock);
> > +
> > +	if (!arm_pmu) {
> > +		/*
> > +		 * No PMU set, get the default one.
> > +		 *
> > +		 * The observant among you will notice that the supported_cpus
> > +		 * mask does not get updated for the default PMU even though it
> > +		 * is quite possible the selected instance supports only a
> > +		 * subset of cores in the system. This is intentional, and
> > +		 * upholds the preexisting behavior on heterogeneous systems
> > +		 * where vCPUs can be scheduled on any core but the guest
> > +		 * counters could stop working.
> > +		 */
> > +		arm_pmu = kvm_pmu_probe_armpmu();
> > +		if (!arm_pmu)
> > +			return -ENODEV;
> > +	}
> > +
> > +	kvm->arch.arm_pmu = arm_pmu;
> > +
> > +	return 0;
> > +}
> > +
>
> I'm not too big of a fan of adding the 'default' path to this helper.
> I'd prefer it if kvm_arm_set_vm_pmu() does all the necessary
> initialization for a valid pmu instance. You then avoid introducing
> unexpected error handling where it didn't exist before.
>
> static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
> {
> 	lockdep_assert_held(&kvm->arch.config_lock);
>
> 	kvm->arch.arm_pmu = arm_pmu;
> }
>
> /*
>  * Blurb about default PMUs I'm too lazy to copy/paste
>  */
> static int kvm_arm_set_default_pmu(struct kvm *kvm)
> {
> 	struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
>
> 	if (!arm_pmu)
> 		return -ENODEV;
>
> 	kvm_arm_set_pmu(kvm, arm_pmu);
> 	return 0;
> }
>
Sounds good. We can adapt to your suggestion.

Thank you.
Raghavendra

> --
> Thanks,
> Oliver
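For reference, the fragment below is a rough sketch of how the two existing call sites in pmu-emul.c might end up once adapted to the split helpers suggested above. The helper names kvm_arm_set_pmu() and kvm_arm_set_default_pmu() are taken from Oliver's reply; the surrounding loop, locking, and declarations are elided, and the exact shape is an assumption rather than the final patch.

```c
/*
 * Sketch only: call sites adapted to the split helpers proposed in the
 * review above. Not the actual committed code.
 */

/* In kvm_arm_pmu_v3_set_pmu(), once a matching arm_pmu entry is found: */
			kvm_arm_set_pmu(kvm, arm_pmu);
			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
			ret = 0;
			break;

/* In kvm_arm_pmu_v3_set_attr(), when userspace has not chosen a PMU yet: */
	if (!kvm->arch.arm_pmu) {
		int ret = kvm_arm_set_default_pmu(kvm);

		if (ret)
			return ret;
	}
```

With the default-PMU probing kept out of the plain setter, kvm_arm_pmu_v3_set_pmu() no longer needs to handle an error that cannot occur on its path, which is exactly the point Oliver raises above.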
```diff
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 5606509724787..0ffd1efa90c07 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+	lockdep_assert_held(&kvm->arch.config_lock);
+
+	if (!arm_pmu) {
+		/*
+		 * No PMU set, get the default one.
+		 *
+		 * The observant among you will notice that the supported_cpus
+		 * mask does not get updated for the default PMU even though it
+		 * is quite possible the selected instance supports only a
+		 * subset of cores in the system. This is intentional, and
+		 * upholds the preexisting behavior on heterogeneous systems
+		 * where vCPUs can be scheduled on any core but the guest
+		 * counters could stop working.
+		 */
+		arm_pmu = kvm_pmu_probe_armpmu();
+		if (!arm_pmu)
+			return -ENODEV;
+	}
+
+	kvm->arch.arm_pmu = arm_pmu;
+
+	return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -884,9 +910,13 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 				break;
 			}
 
-			kvm->arch.arm_pmu = arm_pmu;
+			ret = kvm_arm_set_vm_pmu(kvm, arm_pmu);
+			if (ret) {
+				WARN_ON(ret);
+				break;
+			}
+
 			cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-			ret = 0;
 			break;
 		}
 	}
@@ -908,20 +938,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return -EBUSY;
 
 	if (!kvm->arch.arm_pmu) {
-		/*
-		 * No PMU set, get the default one.
-		 *
-		 * The observant among you will notice that the supported_cpus
-		 * mask does not get updated for the default PMU even though it
-		 * is quite possible the selected instance supports only a
-		 * subset of cores in the system. This is intentional, and
-		 * upholds the preexisting behavior on heterogeneous systems
-		 * where vCPUs can be scheduled on any core but the guest
-		 * counters could stop working.
-		 */
-		kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-		if (!kvm->arch.arm_pmu)
-			return -ENODEV;
+		int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+		if (ret)
+			return ret;
 	}
 
 	switch (attr->attr) {
```
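As background for the handlers the diff touches, kvm_arm_pmu_v3_set_pmu() is reached from userspace through the vCPU device-attribute API. Below is a minimal sketch of that call; the vcpu_set_pmu() wrapper and the way pmu_id is obtained (the perf type id of the desired host PMU, typically read from sysfs) are assumptions for illustration and not part of this patch.

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch: ask KVM to back this vCPU's PMU with a specific host PMU
 * instance. 'pmu_id' is the perf type id of the chosen arm_pmu; how it
 * is discovered (e.g. from sysfs) is outside the scope of this example.
 */
static int vcpu_set_pmu(int vcpu_fd, int pmu_id)
{
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_SET_PMU,
		.addr  = (uint64_t)(unsigned long)&pmu_id,
	};

	/*
	 * On failure the ioctl returns -1 with errno set, e.g. ENXIO if no
	 * PMU with that id exists or EBUSY if the VM has already run, per
	 * kvm_arm_pmu_v3_set_pmu() above.
	 */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}
```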