Message ID: c0a9f52107e22957daaa5b1b0e05e4160db5f064.1720452354.git.matthew.barnes@cloud.com
State: Superseded
Series: [RFC,XEN] x86/cpuid: Expose max_vcpus field in HVM hypervisor leaf
On 08.07.2024 17:42, Matthew Barnes wrote:
> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> startup.
>
> There are efforts to support a maximum of 128 vCPUs, which would involve
> bumping the OVMF constant from 64 to 128.
>
> However, it would be more future-proof for OVMF to access the maximum
> number of vCPUs for a domain and set itself up appropriately at
> run-time.
>
> For OVMF to access the maximum vCPU count, Xen will have to expose this
> property via cpuid.

Why "have to"? The information is available from xenstore, isn't it?

> This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
> leaf in edx.

If exposing via CPUID, why only for HVM?

> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -87,6 +87,7 @@
>   * Sub-leaf 0: EAX: Features
>   * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
>   * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
> + * Sub-leaf 0: EDX: max vcpus (iff EAX has XEN_HVM_CPUID_MAX_VCPUS_PRESENT flag)
>   */

Unlike EBX and ECX, the proposed value for EDX cannot be zero. I'm therefore
not entirely convinced that we need a qualifying flag. Things would be
different if the field was "highest possible vCPU ID", which certainly would
be the better approach if the field wasn't occupying the entire register.
Even with it being 32 bits, I'd still suggest switching its meaning this way.

Jan
I'll pitch in, seeing as I created the GitLab ticket.

On Tue Jul 9, 2024 at 7:40 AM BST, Jan Beulich wrote:
> On 08.07.2024 17:42, Matthew Barnes wrote:
> > Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> > startup.
> >
> > There are efforts to support a maximum of 128 vCPUs, which would involve
> > bumping the OVMF constant from 64 to 128.
> >
> > However, it would be more future-proof for OVMF to access the maximum
> > number of vCPUs for a domain and set itself up appropriately at
> > run-time.
> >
> > For OVMF to access the maximum vCPU count, Xen will have to expose this
> > property via cpuid.
>
> Why "have to"? The information is available from xenstore, isn't it?

That would create an avoidable dependency between OVMF and xenstore, precluding
xenstoreless UEFI-enabled domUs.

> > This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
> > leaf in edx.
>
> If exposing via CPUID, why only for HVM?
>
> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -87,6 +87,7 @@
> >   * Sub-leaf 0: EAX: Features
> >   * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
> >   * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
> > + * Sub-leaf 0: EDX: max vcpus (iff EAX has XEN_HVM_CPUID_MAX_VCPUS_PRESENT flag)
> >   */
>
> Unlike EBX and ECX, the proposed value for EDX cannot be zero. I'm therefore
> not entirely convinced that we need a qualifying flag. Things would be
> different if the field was "highest possible vCPU ID", which certainly would
> be the better approach if the field wasn't occupying the entire register.
> Even with it being 32 bits, I'd still suggest switching its meaning this way.
>
> Jan

Using max_vcpu_id instead of max_vcpus is also fine, but the flag is important
as otherwise it's impossible to retroactively change the meaning of EDX (i.e.
to stop advertising this datum, or repurpose EDX altogether).

We could also reserve only the lower 16 bits of EDX rather than the whole
thing; but we have plenty of subleafs for growth, so I'm not sure it's worth
it.

Cheers,
Alejandro
On 09.07.2024 13:11, Alejandro Vallejo wrote:
> I'll pitch in, seeing as I created the GitLab ticket.
>
> On Tue Jul 9, 2024 at 7:40 AM BST, Jan Beulich wrote:
>> On 08.07.2024 17:42, Matthew Barnes wrote:
>>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
>>> startup.
>>>
>>> There are efforts to support a maximum of 128 vCPUs, which would involve
>>> bumping the OVMF constant from 64 to 128.
>>>
>>> However, it would be more future-proof for OVMF to access the maximum
>>> number of vCPUs for a domain and set itself up appropriately at
>>> run-time.
>>>
>>> For OVMF to access the maximum vCPU count, Xen will have to expose this
>>> property via cpuid.
>>
>> Why "have to"? The information is available from xenstore, isn't it?
>
> That would create an avoidable dependency between OVMF and xenstore, precluding
> xenstoreless UEFI-enabled domUs.

Right, but that's a desirable thing, so still not "have to".

>>> This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
>>> leaf in edx.
>>
>> If exposing via CPUID, why only for HVM?
>>
>>> --- a/xen/include/public/arch-x86/cpuid.h
>>> +++ b/xen/include/public/arch-x86/cpuid.h
>>> @@ -87,6 +87,7 @@
>>>   * Sub-leaf 0: EAX: Features
>>>   * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
>>>   * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
>>> + * Sub-leaf 0: EDX: max vcpus (iff EAX has XEN_HVM_CPUID_MAX_VCPUS_PRESENT flag)
>>>   */
>>
>> Unlike EBX and ECX, the proposed value for EDX cannot be zero. I'm therefore
>> not entirely convinced that we need a qualifying flag. Things would be
>> different if the field was "highest possible vCPU ID", which certainly would
>> be the better approach if the field wasn't occupying the entire register.
>> Even with it being 32 bits, I'd still suggest switching its meaning this way.
>
> Using max_vcpu_id instead of max_vcpus is also fine, but the flag is important
> as otherwise it's impossible to retroactively change the meaning of EDX (i.e.
> to stop advertising this datum, or repurpose EDX altogether).

Hmm, re-purposing. Very interesting thought. I don't think we should ever
do that.

> We could also reserve only the lower 16 bits of EDX rather than the whole
> thing; but we have plenty of subleafs for growth, so I'm not sure it's worth
> it.

And I was only mentioning it, without meaning to suggest to shrink.

Jan
On Tue, Jul 09, 2024 at 08:40:18AM +0200, Jan Beulich wrote:
> On 08.07.2024 17:42, Matthew Barnes wrote:
> > Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
> > startup.
> >
> > There are efforts to support a maximum of 128 vCPUs, which would involve
> > bumping the OVMF constant from 64 to 128.
> >
> > However, it would be more future-proof for OVMF to access the maximum
> > number of vCPUs for a domain and set itself up appropriately at
> > run-time.
> >
> > For OVMF to access the maximum vCPU count, Xen will have to expose this
> > property via cpuid.
>
> Why "have to"? The information is available from xenstore, isn't it?

I shall reword the commit message in patch v2 to avoid the wording "have
to".

> > This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
> > leaf in edx.
>
> If exposing via CPUID, why only for HVM?

Other related cpuid fields are also exposed in the HVM hypervisor leaf,
such as the vcpu id and the domain id.

Having said that, I wouldn't mind moving this field (or other fields, in
a separate patch) to a location meant for HVM *and* PV guests. Do you
have any suggestions?

> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -87,6 +87,7 @@
> >   * Sub-leaf 0: EAX: Features
> >   * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
> >   * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
> > + * Sub-leaf 0: EDX: max vcpus (iff EAX has XEN_HVM_CPUID_MAX_VCPUS_PRESENT flag)
> >   */
>
> Unlike EBX and ECX, the proposed value for EDX cannot be zero. I'm therefore
> not entirely convinced that we need a qualifying flag. Things would be
> different if the field was "highest possible vCPU ID", which certainly would
> be the better approach if the field wasn't occupying the entire register.
> Even with it being 32 bits, I'd still suggest switching its meaning this way.

I shall tweak the value from "maximum vcpu count" to "maximum vcpu ID"
in patch v2.

Matt
On 16.07.2024 17:03, Matthew Barnes wrote:
> On Tue, Jul 09, 2024 at 08:40:18AM +0200, Jan Beulich wrote:
>> On 08.07.2024 17:42, Matthew Barnes wrote:
>>> Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
>>> startup.
>>>
>>> There are efforts to support a maximum of 128 vCPUs, which would involve
>>> bumping the OVMF constant from 64 to 128.
>>>
>>> However, it would be more future-proof for OVMF to access the maximum
>>> number of vCPUs for a domain and set itself up appropriately at
>>> run-time.
>>>
>>> For OVMF to access the maximum vCPU count, Xen will have to expose this
>>> property via cpuid.
>>
>> Why "have to"? The information is available from xenstore, isn't it?
>
> I shall reword the commit message in patch v2 to avoid the wording "have
> to".
>
>>> This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
>>> leaf in edx.
>>
>> If exposing via CPUID, why only for HVM?
>
> Other related cpuid fields are also exposed in the HVM hypervisor leaf,
> such as the vcpu id and the domain id.
>
> Having said that, I wouldn't mind moving this field (or other fields, in
> a separate patch) to a location meant for HVM *and* PV guests. Do you
> have any suggestions?

I don't think we can literally move anything. We could duplicate things
into shared fields, but would that gain us anything? Therefore I think
going forward we should limit type-specific fields to cases where type
really matters, and otherwise expose in a type-independent way.

Jan
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ee91fc56b125..b439ee94f562 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1169,6 +1169,10 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         res->a |= XEN_HVM_CPUID_DOMID_PRESENT;
         res->c = d->domain_id;
 
+        /* Indicate presence of max vcpus and set it in edx */
+        res->a |= XEN_HVM_CPUID_MAX_VCPUS_PRESENT;
+        res->d = d->max_vcpus;
+
         /*
          * Per-vCPU event channel upcalls are implemented and work
          * correctly with PIRQs routed over event channels.
diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 3bb0dd249ff9..a11c9b684308 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -87,6 +87,7 @@
  * Sub-leaf 0: EAX: Features
  * Sub-leaf 0: EBX: vcpu id (iff EAX has XEN_HVM_CPUID_VCPU_ID_PRESENT flag)
  * Sub-leaf 0: ECX: domain id (iff EAX has XEN_HVM_CPUID_DOMID_PRESENT flag)
+ * Sub-leaf 0: EDX: max vcpus (iff EAX has XEN_HVM_CPUID_MAX_VCPUS_PRESENT flag)
  */
 #define XEN_HVM_CPUID_APIC_ACCESS_VIRT (1u << 0) /* Virtualized APIC registers */
 #define XEN_HVM_CPUID_X2APIC_VIRT      (1u << 1) /* Virtualized x2APIC accesses */
@@ -107,6 +108,8 @@
  */
 #define XEN_HVM_CPUID_UPCALL_VECTOR    (1u << 6)
 
+#define XEN_HVM_CPUID_MAX_VCPUS_PRESENT (1u << 7) /* max vcpus is present in EDX */
+
 /*
  * Leaf 6 (0x40000x05)
  * PV-specific parameters
Currently, OVMF is hard-coded to set up a maximum of 64 vCPUs on
startup.

There are efforts to support a maximum of 128 vCPUs, which would involve
bumping the OVMF constant from 64 to 128.

However, it would be more future-proof for OVMF to access the maximum
number of vCPUs for a domain and set itself up appropriately at
run-time.

For OVMF to access the maximum vCPU count, Xen will have to expose this
property via cpuid.

This patch exposes the max_vcpus field via cpuid on the HVM hypervisor
leaf in edx.

Tested on HVM guests running Ubuntu 22.04 LTS and Windows 10 x64.

GitLab ticket: https://gitlab.com/xen-project/xen/-/issues/191

Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
---
 xen/arch/x86/traps.c                | 4 ++++
 xen/include/public/arch-x86/cpuid.h | 3 +++
 2 files changed, 7 insertions(+)