From patchwork Wed May 8 12:39:20 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13658673
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v2 1/8] xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
Date: Wed, 8 May 2024 13:39:20 +0100
Message-Id: <4095f31a88589ced2b620e8ebbb84cdc2fae8914.1715102098.git.alejandro.vallejo@cloud.com>

This allows the initial x2APIC ID to be sent on the migration stream. The
hardcoded mapping x2apic_id = 2 * vcpu_id is maintained for the time being.

Given that the vLAPIC data is zero-extended on restore, fix up migrations
from hosts without the field by setting it to the old convention if zero.

x2APIC IDs are calculated from the CPU policy where the guest topology is
defined. For the time being, the function simply returns the old
relationship, but it will eventually return results consistent with the
topology.

Signed-off-by: Alejandro Vallejo
---
v2:
  * Removed usage of SET_xAPIC_ID().
  * Restored previous logic when exposing leaf 0xb, and gated it for HVM only.
  * Rewrote the comment in lapic_load_fixup(), including the implicit
    assumption.
  * Moved vlapic_cpu_policy_changed() into hvm_cpuid_policy_changed().
  * const-ified policy in vlapic_cpu_policy_changed().
---
 xen/arch/x86/cpuid.c                   | 15 ++++---------
 xen/arch/x86/hvm/vlapic.c              | 30 ++++++++++++++++++++++++--
 xen/arch/x86/include/asm/hvm/hvm.h     |  1 +
 xen/arch/x86/include/asm/hvm/vlapic.h  |  2 ++
 xen/include/public/arch-x86/hvm/save.h |  2 ++
 xen/include/xen/lib/x86/cpu-policy.h   |  9 ++++++++
 xen/lib/x86/policy.c                   | 11 ++++++++++
 7 files changed, 57 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 7a38e032146a..242c21ec5bb6 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -139,10 +139,9 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     const struct cpu_user_regs *regs;

     case 0x1:
-        /* TODO: Rework topology logic. */
         res->b &= 0x00ffffffu;

         if ( is_hvm_domain(d) )
-            res->b |= (v->vcpu_id * 2) << 24;
+            res->b |= vlapic_x2apic_id(vcpu_vlapic(v)) << 24;

         /* TODO: Rework vPMU control in terms of toolstack choices. */
         if ( vpmu_available(v) &&
@@ -311,19 +310,13 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         break;

     case 0xb:
-        /*
-         * In principle, this leaf is Intel-only.  In practice, it is tightly
-         * coupled with x2apic, and we offer an x2apic-capable APIC emulation
-         * to guests on AMD hardware as well.
-         *
-         * TODO: Rework topology logic.
-         */
-        if ( p->basic.x2apic )
+        /* Don't expose topology information to PV guests */
+        if ( is_hvm_domain(d) && p->basic.x2apic )
         {
             *(uint8_t *)&res->c = subleaf;

             /* Fix the x2APIC identifier.
              */
-            res->d = v->vcpu_id * 2;
+            res->d = vlapic_x2apic_id(vcpu_vlapic(v));
         }
         break;

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 05072a21bf38..61a96474006b 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1069,7 +1069,7 @@ static uint32_t x2apic_ldr_from_id(uint32_t id)
 static void set_x2apic_id(struct vlapic *vlapic)
 {
     const struct vcpu *v = vlapic_vcpu(vlapic);
-    uint32_t apic_id = v->vcpu_id * 2;
+    uint32_t apic_id = vlapic->hw.x2apic_id;
     uint32_t apic_ldr = x2apic_ldr_from_id(apic_id);

     /*
@@ -1083,6 +1083,22 @@ static void set_x2apic_id(struct vlapic *vlapic)
     vlapic_set_reg(vlapic, APIC_LDR, apic_ldr);
 }

+void vlapic_cpu_policy_changed(struct vcpu *v)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
+
+    /*
+     * Don't override the initial x2APIC ID if we have migrated it or
+     * if the domain doesn't have vLAPIC at all.
+     */
+    if ( !has_vlapic(v->domain) || vlapic->loaded.hw )
+        return;
+
+    vlapic->hw.x2apic_id = x86_x2apic_id_from_vcpu_id(cp, v->vcpu_id);
+    vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id));
+}
+
 int guest_wrmsr_apic_base(struct vcpu *v, uint64_t val)
 {
     const struct cpu_policy *cp = v->domain->arch.cpu_policy;
@@ -1449,7 +1465,7 @@ void vlapic_reset(struct vlapic *vlapic)
     if ( v->vcpu_id == 0 )
         vlapic->hw.apic_base_msr |= APIC_BASE_BSP;

-    vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24);
+    vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id));

     vlapic_do_init(vlapic);
 }
@@ -1514,6 +1530,16 @@ static void lapic_load_fixup(struct vlapic *vlapic)
     const struct vcpu *v = vlapic_vcpu(vlapic);
     uint32_t good_ldr = x2apic_ldr_from_id(vlapic->loaded.id);

+    /*
+     * When loading a record without hw.x2apic_id in the save stream,
+     * calculate it using the traditional "vcpu_id * 2" relation.
+     * There's an implicit assumption that vCPU0 always has x2APIC ID 0,
+     * which is true for the old relation and still holds under the new
+     * x2APIC generation algorithm.  While that case goes through the
+     * conditional, it's benign because it still maps to zero.
+     */
+    if ( !vlapic->hw.x2apic_id )
+        vlapic->hw.x2apic_id = v->vcpu_id * 2;
+
     /* Skip fixups on xAPIC mode, or if the x2APIC LDR is already correct */
     if ( !vlapic_x2apic_mode(vlapic) ||
          (vlapic->loaded.ldr == good_ldr) )

diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 0c9e6f15645d..e1f0585d75a9 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -448,6 +448,7 @@ static inline void hvm_update_guest_efer(struct vcpu *v)
 static inline void hvm_cpuid_policy_changed(struct vcpu *v)
 {
     alternative_vcall(hvm_funcs.cpuid_policy_changed, v);
+    vlapic_cpu_policy_changed(v);
 }

 static inline void hvm_set_tsc_offset(struct vcpu *v, uint64_t offset,

diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h
index 88ef94524339..e8d41313abd3 100644
--- a/xen/arch/x86/include/asm/hvm/vlapic.h
+++ b/xen/arch/x86/include/asm/hvm/vlapic.h
@@ -44,6 +44,7 @@
 #define vlapic_xapic_mode(vlapic)                       \
     (!vlapic_hw_disabled(vlapic) &&                     \
      !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD))
+#define vlapic_x2apic_id(vlapic) ((vlapic)->hw.x2apic_id)

 /*
  * Generic APIC bitmap vector update & search routines.
@@ -107,6 +108,7 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool force_ack);

 int vlapic_init(struct vcpu *v);
 void vlapic_destroy(struct vcpu *v);
+void vlapic_cpu_policy_changed(struct vcpu *v);

 void vlapic_reset(struct vlapic *vlapic);

diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 7ecacadde165..1c2ec669ffc9 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -394,6 +394,8 @@ struct hvm_hw_lapic {
     uint32_t disabled; /* VLAPIC_xx_DISABLED */
     uint32_t timer_divisor;
     uint64_t tdt_msr;
+    uint32_t x2apic_id;
+    uint32_t rsvd_zero;
 };

 DECLARE_HVM_SAVE_TYPE(LAPIC, 5, struct hvm_hw_lapic);

diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index d5e447e9dc06..392320b9adbe 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -542,6 +542,15 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);

+/**
+ * Calculates the x2APIC ID of a vCPU given a CPU policy
+ *
+ * @param p  CPU policy of the domain.
+ * @param id vCPU ID of the vCPU.
+ * @returns x2APIC ID of the vCPU.
+ */
+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */

 /*

diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f033d22785be..4cef658feeb8 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,6 +2,17 @@

 #include

+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
+{
+    /*
+     * TODO: Derive x2APIC ID from the topology information inside `p`
+     *       rather than from vCPU ID.  This bodge is a temporary measure
+     *       until all infra is in place to retrieve or derive the initial
+     *       x2APIC ID from migrated domains.
+     */
+    return id * 2;
+}
+
 int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)

From patchwork Wed May 8 12:39:21 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13658668
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v2 2/8] xen/x86: Simplify header dependencies in x86/hvm
Date: Wed, 8 May 2024 13:39:21 +0100
Message-Id: <00ce7005d1d6db5c1ffc2d5023d34d4bd34ff841.1715102098.git.alejandro.vallejo@cloud.com>

Otherwise it's not possible to call functions declared in hvm/vlapic.h from
the inline functions of hvm/hvm.h.

This is because a static inline in vlapic.h depends on hvm.h, and pulls it
in transitively through vpt.h. The ultimate cause is having hvm.h included
in any of the "v*.h" headers, so break the cycle by moving the guilty
inline into hvm.h.

No functional change.

Signed-off-by: Alejandro Vallejo
Reviewed-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2:
  * New patch. Prereq to moving vlapic_cpu_policy_changed() onto hvm.h.
---
 xen/arch/x86/hvm/irq.c                | 6 +++---
 xen/arch/x86/hvm/vlapic.c             | 4 ++--
 xen/arch/x86/include/asm/hvm/hvm.h    | 6 ++++++
 xen/arch/x86/include/asm/hvm/vlapic.h | 6 ------
 xen/arch/x86/include/asm/hvm/vpt.h    | 1 -
 5 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 4a9fe82cbd8d..4f5479b12c98 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -512,13 +512,13 @@ struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
     int vector;

     /*
-     * Always call vlapic_sync_pir_to_irr so that PIR is synced into IRR when
-     * using posted interrupts. Note this is also done by
+     * Always call hvm_vlapic_sync_pir_to_irr so that PIR is synced into IRR
+     * when using posted interrupts.
Note this is also done by
     * vlapic_has_pending_irq but depending on which interrupts are pending
     * hvm_vcpu_has_pending_irq will return early without calling
     * vlapic_has_pending_irq.
     */
-    vlapic_sync_pir_to_irr(v);
+    hvm_vlapic_sync_pir_to_irr(v);

     if ( unlikely(v->arch.nmi_pending) )
         return hvm_intack_nmi;

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 61a96474006b..8a244100009c 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -98,7 +98,7 @@ static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
 static int vlapic_find_highest_irr(struct vlapic *vlapic)
 {
-    vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));
+    hvm_vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));

     return vlapic_find_highest_vector(&vlapic->regs->data[APIC_IRR]);
 }

@@ -1516,7 +1516,7 @@ static int cf_check lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
     if ( !has_vlapic(v->domain) )
         return 0;

-    vlapic_sync_pir_to_irr(v);
+    hvm_vlapic_sync_pir_to_irr(v);

     return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
 }

diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index e1f0585d75a9..84911f3ebcb4 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -798,6 +798,12 @@ static inline void hvm_update_vlapic_mode(struct vcpu *v)
     alternative_vcall(hvm_funcs.update_vlapic_mode, v);
 }

+static inline void hvm_vlapic_sync_pir_to_irr(struct vcpu *v)
+{
+    if ( hvm_funcs.sync_pir_to_irr )
+        alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
+}
+
 #else  /* CONFIG_HVM */

 #define hvm_enabled false

diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h
index e8d41313abd3..34f23cd38a20 100644
--- a/xen/arch/x86/include/asm/hvm/vlapic.h
+++ b/xen/arch/x86/include/asm/hvm/vlapic.h
@@ -139,10 +139,4 @@ bool vlapic_match_dest(
     const struct vlapic *target, const struct vlapic *source,
     int short_hand, uint32_t dest, bool dest_mode);

-static inline void
-vlapic_sync_pir_to_irr(struct vcpu *v)
-{
-    if ( hvm_funcs.sync_pir_to_irr )
-        alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
-}
-
 #endif /* __ASM_X86_HVM_VLAPIC_H__ */

diff --git a/xen/arch/x86/include/asm/hvm/vpt.h b/xen/arch/x86/include/asm/hvm/vpt.h
index feb0bf43f14b..0b92b286252d 100644
--- a/xen/arch/x86/include/asm/hvm/vpt.h
+++ b/xen/arch/x86/include/asm/hvm/vpt.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include

 /*
  * Abstract layer of periodic time, one short time.

From patchwork Wed May 8 12:39:22 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13658676
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v2 3/8] x86/vlapic: Move lapic_load_hidden migration checks to the check hook
Date: Wed, 8 May 2024 13:39:22 +0100
Message-Id: <499e029a7d2fce4fb9118b1e508313f369b37c79.1715102098.git.alejandro.vallejo@cloud.com>

While at it, add a check for the reserved field in the hidden save area.

Signed-off-by: Alejandro Vallejo
---
v2:
  * New patch. Addresses the missing check for rsvd_zero in v1.
---
 xen/arch/x86/hvm/vlapic.c | 41 ++++++++++++++++++++++++++++-----------
 1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 8a244100009c..2f06bff1b2cc 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1573,35 +1573,54 @@ static void lapic_load_fixup(struct vlapic *vlapic)
            v, vlapic->loaded.id, vlapic->loaded.ldr, good_ldr);
 }

-static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
+static int cf_check lapic_check_hidden(const struct domain *d,
+                                       hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
-    struct vcpu *v;
-    struct vlapic *s;
+    struct hvm_hw_lapic s;

     if ( !has_vlapic(d) )
         return -ENODEV;

     /* Which vlapic to load?
      */
-    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    if ( vcpuid >= d->max_vcpus || d->vcpu[vcpuid] == NULL )
     {
         dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n",
                 d->domain_id, vcpuid);
         return -EINVAL;
     }

-    s = vcpu_vlapic(v);
-    if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 )
+    if ( hvm_load_entry_zeroextend(LAPIC, h, &s) )
+        return -ENODATA;
+
+    /* EN=0 with EXTD=1 is illegal */
+    if ( (s.apic_base_msr & (APIC_BASE_ENABLE | APIC_BASE_EXTD)) ==
+         APIC_BASE_EXTD )
+        return -EINVAL;
+
+    /*
+     * Fail migrations from newer versions of Xen where
+     * rsvd_zero is interpreted as something else.
+     */
+    if ( s.rsvd_zero )
         return -EINVAL;

+    return 0;
+}
+
+static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct vcpu *v = d->vcpu[vcpuid];
+    struct vlapic *s = vcpu_vlapic(v);
+
+    if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 )
+        BUG();
+
     s->loaded.hw = 1;
     if ( s->loaded.regs )
         lapic_load_fixup(s);

-    if ( !(s->hw.apic_base_msr & APIC_BASE_ENABLE) &&
-         unlikely(vlapic_x2apic_mode(s)) )
-        return -EINVAL;
-
     hvm_update_vlapic_mode(v);

     return 0;
@@ -1643,7 +1662,7 @@ static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }

-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_check_hidden,
                           lapic_load_hidden, 1, HVMSR_PER_VCPU);
 HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
                           lapic_load_regs, 1, HVMSR_PER_VCPU);

From patchwork Wed May 8 12:39:23 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13658675
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v2 4/8] tools/hvmloader: Wake APs with hypercalls and not with INIT+SIPI+SIPI
Date: Wed, 8 May 2024 13:39:23 +0100

Removes a needless assembly entry point and simplifies the codebase by
allowing hvmloader to wake APs it doesn't know the APIC ID of.

Signed-off-by: Alejandro Vallejo
---
v2:
  * New patch. Replaces adding cpu policy to hvmloader in v1.
---
 tools/firmware/hvmloader/smp.c | 111 +++++++++++++--------------------
 1 file changed, 44 insertions(+), 67 deletions(-)

diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c
index 082b17f13818..a668f15d7e1f 100644
--- a/tools/firmware/hvmloader/smp.c
+++ b/tools/firmware/hvmloader/smp.c
@@ -22,88 +22,68 @@
 #include "util.h"
 #include "config.h"
 #include "apic_regs.h"
+#include "hypercall.h"

-#define AP_BOOT_EIP 0x1000
-extern char ap_boot_start[], ap_boot_end[];
+#include
+#include
+
+#include

 static int ap_callin, ap_cpuid;

-asm (
-    "    .text                       \n"
-    "    .code16                     \n"
-    "ap_boot_start: .code16          \n"
-    "    mov   %cs,%ax               \n"
-    "    mov   %ax,%ds               \n"
-    "    lgdt  gdt_desr-ap_boot_start\n"
-    "    xor   %ax, %ax              \n"
-    "    inc   %ax                   \n"
-    "    lmsw  %ax                   \n"
-    "    ljmpl $0x08,$1f             \n"
-    "gdt_desr:                       \n"
-    "    .word gdt_end - gdt - 1     \n"
-    "    .long gdt                   \n"
-    "ap_boot_end:   .code32          \n"
-    "1:  mov   $0x10,%eax            \n"
-    "    mov   %eax,%ds              \n"
-    "    mov   %eax,%es              \n"
-    "    mov   %eax,%ss              \n"
-    "    movl  $stack_top,%esp       \n"
-    "    movl  %esp,%ebp             \n"
-    "    call  ap_start              \n"
-    "1:  hlt                         \n"
-    "    jmp   1b                    \n"
-    "                                \n"
-    "    .align 8                    \n"
-    "gdt:                            \n"
-    "    .quad 0x0000000000000000    \n"
-    "    .quad 0x00cf9a000000ffff    \n" /* 0x08: Flat code segment */
-    "    .quad 0x00cf92000000ffff    \n" /* 0x10: Flat data segment */
-    "gdt_end:                        \n"
-    "                                \n"
-    "    .bss                        \n"
-    "
.align 8 \n" - "stack: \n" - " .skip 0x4000 \n" - "stack_top: \n" - " .text \n" - ); - -void ap_start(void); /* non-static avoids unused-function compiler warning */ -/*static*/ void ap_start(void) +static void ap_start(void) { printf(" - CPU%d ... ", ap_cpuid); cacheattr_init(); printf("done.\n"); + + if ( !ap_cpuid ) + return; + wmb(); ap_callin = 1; -} -static void lapic_wait_ready(void) -{ - while ( lapic_read(APIC_ICR) & APIC_ICR_BUSY ) - cpu_relax(); + while ( 1 ) + asm volatile ( "hlt" ); } static void boot_cpu(unsigned int cpu) { - unsigned int icr2 = SET_APIC_DEST_FIELD(LAPIC_ID(cpu)); + static uint8_t ap_stack[4 * PAGE_SIZE] __attribute__ ((aligned (16))); + static struct vcpu_hvm_context ap; /* Initialise shared variables. */ ap_cpuid = cpu; - ap_callin = 0; wmb(); - /* Wake up the secondary processor: INIT-SIPI-SIPI... */ - lapic_wait_ready(); - lapic_write(APIC_ICR2, icr2); - lapic_write(APIC_ICR, APIC_DM_INIT); - lapic_wait_ready(); - lapic_write(APIC_ICR2, icr2); - lapic_write(APIC_ICR, APIC_DM_STARTUP | (AP_BOOT_EIP >> 12)); - lapic_wait_ready(); - lapic_write(APIC_ICR2, icr2); - lapic_write(APIC_ICR, APIC_DM_STARTUP | (AP_BOOT_EIP >> 12)); - lapic_wait_ready(); + /* Wake up the secondary processor */ + ap = (struct vcpu_hvm_context) { + .mode = VCPU_HVM_MODE_32B, + .cpu_regs.x86_32 = { + .eip = (uint32_t)ap_start, + .esp = (uint32_t)ap_stack + ARRAY_SIZE(ap_stack), + + /* Protected mode with MMU off */ + .cr0 = X86_CR0_PE, + + /* Prepopulate the GDT */ + .cs_limit = -1U, + .ds_limit = -1U, + .ss_limit = -1U, + .es_limit = -1U, + .tr_limit = 0x67, + .cs_ar = 0xc9b, + .ds_ar = 0xc93, + .es_ar = 0xc93, + .ss_ar = 0xc93, + .tr_ar = 0x8b, + }, + }; + + if ( hypercall_vcpu_op(VCPUOP_initialise, cpu, &ap) ) + BUG(); + if ( hypercall_vcpu_op(VCPUOP_up, cpu, NULL) ) + BUG(); /* * Wait for the secondary processor to complete initialisation. @@ -113,17 +93,14 @@ static void boot_cpu(unsigned int cpu) cpu_relax(); /* Take the secondary processor offline. 
*/ - lapic_write(APIC_ICR2, icr2); - lapic_write(APIC_ICR, APIC_DM_INIT); - lapic_wait_ready(); + if ( hypercall_vcpu_op(VCPUOP_down, cpu, NULL) ) + BUG(); } void smp_initialise(void) { unsigned int i, nr_cpus = hvm_info->nr_vcpus; - memcpy((void *)AP_BOOT_EIP, ap_boot_start, ap_boot_end - ap_boot_start); - printf("Multiprocessor initialisation:\n"); ap_start(); for ( i = 1; i < nr_cpus; i++ ) From patchwork Wed May 8 12:39:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13658669 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EC0A1C19F4F for ; Wed, 8 May 2024 12:39:49 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.718780.1121303 (Exim 4.92) (envelope-from ) id 1s4gaB-0003TH-VS; Wed, 08 May 2024 12:39:39 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 718780.1121303; Wed, 08 May 2024 12:39:39 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s4gaB-0003SD-Ov; Wed, 08 May 2024 12:39:39 +0000 Received: by outflank-mailman (input) for mailman id 718780; Wed, 08 May 2024 12:39:38 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s4gaA-0002b0-Jf for xen-devel@lists.xenproject.org; Wed, 08 May 2024 12:39:38 +0000 Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com [2a00:1450:4864:20::62d]) by se1-gles-sth1.inumbo.com (Halon) with 
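For readers puzzling over the magic segment attribute constants in the patch above (`cs_ar = 0xc9b`, `ds_ar = 0xc93`, `tr_ar = 0x8b`): they follow the standard x86 descriptor access-rights layout. A small standalone sketch of how those bits unpack (`decode_ar` is a hypothetical helper written for illustration, not hvmloader code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical decoder for the x86 segment access-rights format used by
 * the cs_ar/ds_ar/tr_ar fields above. Illustrative only; not part of
 * the patch.
 */
struct seg_attrs {
    uint8_t type; /* bits 0-3: segment type */
    bool s;       /* bit  4: code/data (1) vs system (0) */
    uint8_t dpl;  /* bits 5-6: descriptor privilege level */
    bool p;       /* bit  7: present */
    bool db;      /* bit 10: 32-bit default operand size */
    bool g;       /* bit 11: 4K limit granularity */
};

static struct seg_attrs decode_ar(uint16_t ar)
{
    return (struct seg_attrs){
        .type = ar & 0xf,
        .s    = (ar >> 4) & 1,
        .dpl  = (ar >> 5) & 3,
        .p    = (ar >> 7) & 1,
        .db   = (ar >> 10) & 1,
        .g    = (ar >> 11) & 1,
    };
}
```

Under that layout, 0xc9b decodes to a present, ring-0, 4K-granular 32-bit code segment, 0xc93 to the matching data segment, and 0x8b to a busy 32-bit TSS — mirroring the flat GDT the removed assembly stub used to build by hand.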
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v2 5/8] tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves
Date: Wed, 8 May 2024 13:39:24 +0100

Make it so the APs expose their own APIC IDs in a LUT. We can use that
LUT to populate the MADT, decoupling the algorithm that relates CPU IDs
and APIC IDs from hvmloader.

While at it, also remove ap_callin, as writing the APIC ID may serve the
same purpose.

Signed-off-by: Alejandro Vallejo
---
v2:
  * New patch. Replaces adding cpu policy to hvmloader in v1.
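The retrieval in this patch hinges on a CPUID detail: leaf 1 reports the initial APIC ID in EBX[31:24], which saturates at 255, while leaf 0xb reports the full x2APIC ID in EDX. A standalone sketch of that selection logic, with the CPUID outputs passed in as mocked parameters so it runs anywhere (`apic_id_from_leaves` is a hypothetical name, not hvmloader code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Selection logic equivalent to the read_apic_id() helper the diff below
 * introduces, but taking the relevant CPUID register values as
 * parameters. Hypothetical helper; values are mocked.
 */
static uint32_t apic_id_from_leaves(uint32_t leaf1_ebx, uint32_t leaf_0xb_edx)
{
    uint32_t apic_id = leaf1_ebx >> 24; /* leaf 1 EBX[31:24]: initial APIC ID */

    /* IDs >= 255 saturate in leaf 1; the full x2APIC ID lives in leaf 0xb EDX */
    if ( apic_id == 255 )
        apic_id = leaf_0xb_edx;

    return apic_id;
}
```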
---
 tools/firmware/hvmloader/config.h    |  6 ++++-
 tools/firmware/hvmloader/hvmloader.c |  4 +--
 tools/firmware/hvmloader/smp.c       | 40 +++++++++++++++++++++++-----
 tools/firmware/hvmloader/util.h      |  5 ++++
 xen/arch/x86/include/asm/hvm/hvm.h   |  1 +
 5 files changed, 47 insertions(+), 9 deletions(-)

diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h
index c82adf6dc508..edf6fa9c908c 100644
--- a/tools/firmware/hvmloader/config.h
+++ b/tools/firmware/hvmloader/config.h
@@ -4,6 +4,8 @@
 #include
 #include
 
+#include
+
 enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt };
 extern enum virtual_vga virtual_vga;
 
@@ -49,8 +51,10 @@ extern uint8_t ioapic_version;
 
 #define IOAPIC_ID           0x01
 
+extern uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS];
+
 #define LAPIC_BASE_ADDRESS  0xfee00000
-#define LAPIC_ID(vcpu_id)   ((vcpu_id) * 2)
+#define LAPIC_ID(vcpu_id)   (CPU_TO_X2APICID[(vcpu_id)])
 
 #define PCI_ISA_DEVFN       0x08    /* dev 1, fn 0 */
 #define PCI_ISA_IRQ_MASK    0x0c20U /* ISA IRQs 5,10,11 are PCI connected */
diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloader/hvmloader.c
index c58841e5b556..1eba92229925 100644
--- a/tools/firmware/hvmloader/hvmloader.c
+++ b/tools/firmware/hvmloader/hvmloader.c
@@ -342,11 +342,11 @@ int main(void)
 
     printf("CPU speed is %u MHz\n", get_cpu_mhz());
 
+    smp_initialise();
+
     apic_setup();
     pci_setup();
 
-    smp_initialise();
-
     perform_tests();
 
     if ( bios->bios_info_setup )
diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c
index a668f15d7e1f..4d75f239c2f5 100644
--- a/tools/firmware/hvmloader/smp.c
+++ b/tools/firmware/hvmloader/smp.c
@@ -29,7 +29,34 @@
 
 #include
 
-static int ap_callin, ap_cpuid;
+static int ap_cpuid;
+
+/**
+ * Lookup table of x2APIC IDs.
+ *
+ * Each entry is populated by its respective CPU as it comes online. This is required
+ * for generating the MADT with minimal assumptions about ID relationships.
+ */
+uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS];
+
+static uint32_t read_apic_id(void)
+{
+    uint32_t apic_id;
+
+    cpuid(1, NULL, &apic_id, NULL, NULL);
+    apic_id >>= 24;
+
+    /*
+     * APIC IDs over 255 are represented by 255 in leaf 1 and are meant to be
+     * read from topology leaves instead. Xen exposes x2APIC IDs in leaf 0xb,
+     * but only if the x2APIC feature is present. If there are that many CPUs
+     * it's guaranteed to be there, so we can avoid checking for it
+     * specifically.
+     */
+    if ( apic_id == 255 )
+        cpuid(0xb, NULL, NULL, NULL, &apic_id);
+
+    return apic_id;
+}
 
 static void ap_start(void)
 {
@@ -37,12 +64,12 @@ static void ap_start(void)
     cacheattr_init();
     printf("done.\n");
 
+    wmb();
+    ACCESS_ONCE(CPU_TO_X2APICID[ap_cpuid]) = read_apic_id();
+
     if ( !ap_cpuid )
         return;
 
-    wmb();
-    ap_callin = 1;
-
     while ( 1 )
         asm volatile ( "hlt" );
 }
@@ -86,10 +113,11 @@ static void boot_cpu(unsigned int cpu)
         BUG();
 
     /*
-     * Wait for the secondary processor to complete initialisation.
+     * Wait for the secondary processor to complete initialisation,
+     * which is signaled by its x2APIC ID being written to the LUT.
      * Do not touch shared resources meanwhile.
      */
-    while ( !ap_callin )
+    while ( !ACCESS_ONCE(CPU_TO_X2APICID[cpu]) )
         cpu_relax();
 
     /* Take the secondary processor offline. */
diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
index 14078bde1e30..51e9003bc615 100644
--- a/tools/firmware/hvmloader/util.h
+++ b/tools/firmware/hvmloader/util.h
@@ -23,6 +23,11 @@ enum {
 #define __STR(...) #__VA_ARGS__
 #define STR(...) __STR(__VA_ARGS__)
 
+#define __ACCESS_ONCE(x) ({ \
+    (void)(typeof(x))0; /* Scalar typecheck. */ \
+    (volatile typeof(x) *)&(x); })
+#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
+
 /* GDT selector values.
 */
 #define SEL_CODE16          0x0008
 #define SEL_DATA16          0x0010
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 84911f3ebcb4..6c005f0b0b38 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 struct pirq; /* needed by pi_update_irte */

From patchwork Wed May 8 12:39:25 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v2 6/8] xen/lib: Add topology generator for x86
Date: Wed, 8 May 2024 13:39:25 +0100
Message-Id: <1ffad529d7fed10381df67215c747fc2d69f805e.1715102098.git.alejandro.vallejo@cloud.com>

Add a helper to populate topology leaves in the cpu policy from
threads/core and cores/package counts. No functional change, as it's not
connected to anything yet.

Signed-off-by: Alejandro Vallejo
---
v2:
  * New patch.
Extracted from v1/patch6 --- tools/tests/cpu-policy/test-cpu-policy.c | 128 +++++++++++++++++++++++ xen/include/xen/lib/x86/cpu-policy.h | 16 +++ xen/lib/x86/policy.c | 86 +++++++++++++++ 3 files changed, 230 insertions(+) diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c index 301df2c00285..0ba8c418b1b3 100644 --- a/tools/tests/cpu-policy/test-cpu-policy.c +++ b/tools/tests/cpu-policy/test-cpu-policy.c @@ -650,6 +650,132 @@ static void test_is_compatible_failure(void) } } +static void test_topo_from_parts(void) +{ + static const struct test { + unsigned int threads_per_core; + unsigned int cores_per_pkg; + struct cpu_policy policy; + } tests[] = { + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + [0] = { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + [1] = { .nr_logical = 1, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + [0] = { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + [1] = { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + [0] = { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + [1] = { .nr_logical = 5, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + [0] = { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + [1] = { .nr_logical = 128, .level = 1, .type = 2, .id_shift = 8, }, + }, + }, + }, + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + [0] = { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + [1] = { .nr_logical = 3, .level = 1, 
.type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + [0] = { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + [1] = { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + [0] = { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + [1] = { .nr_logical = 35, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + [0] = { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + [1] = { .nr_logical = 256, .level = 1, .type = 2, .id_shift = 8, }, + }, + }, + }, + }; + + printf("Testing topology synthesis from parts:\n"); + + for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i ) + { + const struct test *t = &tests[i]; + struct cpu_policy actual = { .x86_vendor = t->policy.x86_vendor }; + int rc = x86_topo_from_parts(&actual, t->threads_per_core, t->cores_per_pkg); + + if ( rc || memcmp(&actual.topo, &t->policy.topo, sizeof(actual.topo)) ) + { +#define TOPO(n) topo.subleaf[(n)] + fail("FAIL[%d] - '%s %u t/c, %u c/p'\n", + rc, + x86_cpuid_vendor_to_str(t->policy.x86_vendor), + t->threads_per_core, t->cores_per_pkg); + printf(" subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 0, t->policy.TOPO(0).nr_logical, actual.TOPO(0).nr_logical, + t->policy.TOPO(0).level, actual.TOPO(0).level, + t->policy.TOPO(0).type, actual.TOPO(0).type, + t->policy.TOPO(0).id_shift, actual.TOPO(0).id_shift); + + printf(" subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 1, t->policy.TOPO(1).nr_logical, 
actual.TOPO(1).nr_logical,
+               t->policy.TOPO(1).level, actual.TOPO(1).level,
+               t->policy.TOPO(1).type, actual.TOPO(1).type,
+               t->policy.TOPO(1).id_shift, actual.TOPO(1).id_shift);
+#undef TOPO
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -667,6 +793,8 @@ int main(int argc, char **argv)
     test_is_compatible_success();
     test_is_compatible_failure();
 
+    test_topo_from_parts();
+
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
     else
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 392320b9adbe..f5df18e9f77c 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -551,6 +551,22 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
  */
 uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id);
 
+/**
+ * Synthesise topology information in `p` given high-level constraints
+ *
+ * Topology is given in various fields across several leaves, some of
+ * which are vendor-specific. This function uses the policy itself to
+ * derive such leaves from threads/core and cores/package.
+ *
+ * @param p                 CPU policy of the domain.
+ * @param threads_per_core  threads/core. Doesn't need to be a power of 2.
+ * @param cores_per_pkg     cores/package. Doesn't need to be a power of 2.
+ * @return 0 on success; -errno on failure
+ */
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index 4cef658feeb8..d033ee5398dd 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -13,6 +13,92 @@ uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
     return vcpu_id * 2;
 }
 
+static unsigned int order(unsigned int n)
+{
+    return 8 * sizeof(n) - __builtin_clz(n);
+}
+
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg)
+{
+    unsigned int threads_per_pkg = threads_per_core * cores_per_pkg;
+    unsigned int apic_id_size;
+
+    if ( !p || !threads_per_core || !cores_per_pkg )
+        return -EINVAL;
+
+    p->basic.max_leaf = MAX(0xb, p->basic.max_leaf);
+
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+    /* thread level */
+    p->topo.subleaf[0].nr_logical = threads_per_core;
+    p->topo.subleaf[0].id_shift = 0;
+    p->topo.subleaf[0].level = 0;
+    p->topo.subleaf[0].type = 1;
+    if ( threads_per_core > 1 )
+        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
+
+    /* core level */
+    p->topo.subleaf[1].nr_logical = cores_per_pkg;
+    if ( p->x86_vendor == X86_VENDOR_INTEL )
+        p->topo.subleaf[1].nr_logical = threads_per_pkg;
+    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
+    p->topo.subleaf[1].level = 1;
+    p->topo.subleaf[1].type = 2;
+    if ( cores_per_pkg > 1 )
+        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
+
+    apic_id_size = p->topo.subleaf[1].id_shift;
+
+    /*
+     * Contrary to what the name might seem to imply, HTT is an enabler for
+     * SMP and there's no harm in setting it even with a single vCPU.
+     */
+    p->basic.htt = true;
+    p->basic.lppp = MIN(0xff, p->basic.lppp);
+
+    switch ( p->x86_vendor )
+    {
+    case X86_VENDOR_INTEL: {
+        struct cpuid_cache_leaf *sl = p->cache.subleaf;
+        for ( size_t i = 0; sl->type &&
+                            i < ARRAY_SIZE(p->cache.raw); i++, sl++ )
+        {
+            sl->cores_per_package = cores_per_pkg - 1;
+            sl->threads_per_cache = threads_per_core - 1;
+            if ( sl->type == 3 /* unified cache */ )
+                sl->threads_per_cache = threads_per_pkg - 1;
+        }
+        break;
+    }
+    case X86_VENDOR_AMD:
+    case X86_VENDOR_HYGON:
+        /* Expose p->basic.lppp */
+        p->extd.cmp_legacy = true;
+
+        /* Clip NC to the maximum value it can hold */
+        p->extd.nc = 0xff;
+        if ( threads_per_pkg <= 0xff )
+            p->extd.nc = threads_per_pkg - 1;
+
+        /* TODO: Expose leaf e1E */
+        p->extd.topoext = false;
+
+        /*
+         * Clip APIC ID to 8 bits, as that's what high core-count machines do.
+         *
+         * That's what AMD EPYC 9654 does with >256 CPUs.
+         */
+        p->extd.apic_id_size = MIN(8, apic_id_size);
+
+        break;
+    }
+
+    return 0;
+}
+
 int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)

From patchwork Wed May 8 12:39:26 2024
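The `id_shift` values expected by `test_topo_from_parts()` above can be reproduced with the patch's `order()` helper: each level's shift is just wide enough to hold the level below, rounded up to a power of two. A standalone sketch of the same arithmetic (`thread_shift`/`core_shift` are illustrative names, not patch code):

```c
#include <assert.h>

/* order() as in the patch: one past the index of the highest set bit. */
static unsigned int order(unsigned int n)
{
    return 8 * sizeof(n) - __builtin_clz(n);
}

/* Per-level shifts as x86_topo_from_parts() computes them (sketch only). */
static unsigned int thread_shift(unsigned int threads_per_core)
{
    return threads_per_core > 1 ? order(threads_per_core - 1) : 0;
}

static unsigned int core_shift(unsigned int threads_per_core,
                               unsigned int cores_per_pkg)
{
    unsigned int shift = thread_shift(threads_per_core);

    if ( cores_per_pkg > 1 )
        shift += order(cores_per_pkg - 1);

    return shift;
}
```

This matches the test vectors: 7 threads/core and 5 cores/pkg yield shifts 3 and 6, while 2 threads/core and 128 cores/pkg yield 1 and 8.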
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v2 7/8] xen/x86: Derive topologically correct x2APIC IDs from the policy
Date: Wed, 8 May 2024 13:39:26 +0100
Message-Id: <87a2a4589e330472b7260ff6ab513744596a4488.1715102098.git.alejandro.vallejo@cloud.com>

Implements the helper for mapping vcpu_id to x2apic_id given a valid
topology in a policy. The algorithm is written with the intention of
extending it to leaves 0x1f and e26 in the future.

Toolstack doesn't set leaf 0xb and the HVM default policy has it
cleared, so the leaf is not implemented. In that case, the new helper
just returns the legacy mapping.

Signed-off-by: Alejandro Vallejo
---
v2:
  * const-ify the test definitions
  * Cosmetic changes (newline + parameter name in prototype)
---
 tools/tests/cpu-policy/test-cpu-policy.c | 63 ++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  2 +
 xen/lib/x86/policy.c                     | 73 ++++++++++++++++++++++--
 3 files changed, 133 insertions(+), 5 deletions(-)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 0ba8c418b1b3..82a6aeb23317 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -776,6 +776,68 @@ static void test_topo_from_parts(void)
     }
 }
 
+static void test_x2apic_id_from_vcpu_id_success(void)
+{
+    static const struct test {
+        unsigned int vcpu_id;
+        unsigned int threads_per_core;
+        unsigned int cores_per_pkg;
+        uint32_t x2apic_id;
+        uint8_t x86_vendor;
+    } tests[] = {
+        {
+            .vcpu_id = 3, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 2,
+        },
+        {
+            .vcpu_id = 6, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 2 << 2,
+        },
+        {
+            .vcpu_id = 24, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 5,
+        },
+        {
+            .vcpu_id = 35, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5),
+        },
+        {
+            .vcpu_id = 96, .threads_per_core = 7, .cores_per_pkg = 3,
+            .x2apic_id = (96 % 7) | (((96 / 7) % 3) << 3) | ((96 / 21) << 5),
+        },
+    };
+
+    const uint8_t vendors[] = {
+        X86_VENDOR_INTEL,
+        X86_VENDOR_AMD,
+        X86_VENDOR_CENTAUR,
+        X86_VENDOR_SHANGHAI,
+        X86_VENDOR_HYGON,
+    };
+
+    printf("Testing x2apic id from vcpu id success:\n");
+
+    /* Perform the test run on every vendor we know about */
+    for ( size_t i = 0; i < ARRAY_SIZE(vendors); ++i )
+    {
+        struct cpu_policy policy = { .x86_vendor = vendors[i] };
+
+        for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+        {
+            const struct test *t = &tests[i];
+            uint32_t x2apic_id;
+            int rc = x86_topo_from_parts(&policy, t->threads_per_core,
+                                         t->cores_per_pkg);
+
+            x2apic_id = x86_x2apic_id_from_vcpu_id(&policy, t->vcpu_id);
+            if ( rc || x2apic_id != t->x2apic_id )
+                fail("FAIL[%d] - '%s cpu%u %u t/c, %u c/p'. bad x2apic_id: expected=%u actual=%u\n",
+                     rc,
+                     x86_cpuid_vendor_to_str(policy.x86_vendor),
+                     t->vcpu_id, t->threads_per_core, t->cores_per_pkg,
+                     t->x2apic_id, x2apic_id);
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -794,6 +856,7 @@ int main(int argc, char **argv)
     test_is_compatible_failure();
 
     test_topo_from_parts();
+    test_x2apic_id_from_vcpu_id_success();
 
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index f5df18e9f77c..2cbc2726a861 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -545,6 +545,8 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
 /**
  * Calculates the x2APIC ID of a vCPU given a CPU policy
  *
+ * If the policy lacks leaf 0xb, this falls back to the legacy mapping of apic_id = cpu * 2
+ *
  * @param p   CPU policy of the domain.
 * @param id  vCPU ID of the vCPU.
 * @returns x2APIC ID of the vCPU.
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index d033ee5398dd..e498e32f8fd7 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,15 +2,78 @@
 
 #include
 
+static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p, size_t lvl)
+{
+    /*
+     * `nr_logical` reported by Intel is the number of THREADS contained in
+     * the next topological scope. For example, assuming a system with 2
+     * threads/core and 3 cores/module in a fully symmetric topology,
+     * `nr_logical` at the core level will report 6, because it's reporting
+     * the number of threads in a module.
+     *
+     * On AMD/Hygon, nr_logical is already normalized by the higher scoped
+     * level (cores/complex, etc) so we can return it as-is.
+     */
+    if ( p->x86_vendor != X86_VENDOR_INTEL || !lvl )
+        return p->topo.subleaf[lvl].nr_logical;
+
+    return p->topo.subleaf[lvl].nr_logical / p->topo.subleaf[lvl - 1].nr_logical;
+}
+
 uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
 {
+    uint32_t shift = 0, x2apic_id = 0;
+
+    /* In the absence of topology leaves, fall back to the traditional mapping */
+    if ( !p->topo.subleaf[0].type )
+        return id * 2;
+
     /*
-     * TODO: Derive x2APIC ID from the topology information inside `p`
-     *       rather than from vCPU ID. This bodge is a temporary measure
-     *       until all infra is in place to retrieve or derive the initial
-     *       x2APIC ID from migrated domains.
+     * `id` means different things at different points of the algorithm
+     *
+     * At lvl=0: global thread_id (same as vcpu_id)
+     * At lvl=1: global core_id
+     * At lvl=2: global socket_id (actually complex_id in AMD, module_id
+     *           in Intel, but the name is inconsequential)
+     *
+     *                  +--+
+     *          ____    |#0|    ______         <= 1 socket
+     *         /        +--+          \+--+
+     *      __#0__          __|#1|__           <= 2 cores/socket
+     *     /  |   \     +--+/  +-|+ \
+     *    #0  #1  #2    |#3|  #4    #5         <= 3 threads/core
+     *                  +--+
+     *
+     * ... and so on. Global in this context means that it's a unique
+     * identifier for the whole topology, and not relative to the level
+     * it's in. For example, in the diagram shown above, we're looking at
+     * thread #3 in the global sense, though it's #0 within its core.
+     *
+     * Note that dividing a global thread_id by the number of threads per
+     * core returns the global core id that contains it. e.g: 0, 1 or 2
+     * divided by 3 returns core_id=0. 3, 4 or 5 divided by 3 returns core
+     * 1, and so on. An analogous argument holds for higher levels. This is
+     * the property we exploit to derive x2apic_id from vcpu_id.
+     *
+     * NOTE: `topo` is currently derived from leaf 0xb, which is bound to
+     * two levels, but once we track leaves 0x1f (or e26) there will be a
+     * few more. The algorithm is written to cope with that case.
      */
-    return vcpu_id * 2;
+    for ( uint32_t i = 0; i < ARRAY_SIZE(p->topo.raw); i++ )
+    {
+        uint32_t nr_parts;
+
+        if ( !p->topo.subleaf[i].type )
+            /* sentinel subleaf */
+            break;
+
+        nr_parts = parts_per_higher_scoped_level(p, i);
+        x2apic_id |= (id % nr_parts) << shift;
+        id /= nr_parts;
+        shift = p->topo.subleaf[i].id_shift;
+    }
+
+    return (id << shift) | x2apic_id;
 }
 
 static unsigned int order(unsigned int n)

From patchwork Wed May 8 12:39:27 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Anthony PERARD, Juergen Gross, Jan Beulich,
 Andrew Cooper, Roger Pau Monné
Subject: [PATCH v2 8/8] xen/x86: Synthesise domain topologies
Date: Wed, 8 May 2024 13:39:27 +0100

Expose sensible topologies in leaf 0xb. At the moment it synthesises
non-HT systems, in line with the previous code intent.

Signed-off-by: Alejandro Vallejo
---
v2:
  * Zap the topology leaves of (pv/hvm)_(def/max)_policy rather than the
    host policy
---
 tools/libs/guest/xg_cpuid_x86.c | 62 +++++----------------------------
 xen/arch/x86/cpu-policy.c       |  9 +++--
 2 files changed, 15 insertions(+), 56 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 4453178100ad..8170769dbe43 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -584,7 +584,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     bool hvm;
     xc_domaininfo_t di;
     struct xc_cpu_policy *p = xc_cpu_policy_init();
-    unsigned int i, nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
+    unsigned int nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
@@ -727,59 +727,15 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     }
     else
     {
-        /*
-         * Topology for HVM guests is entirely controlled by Xen. For now, we
-         * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
-         */
-        p->policy.basic.htt = true;
-        p->policy.extd.cmp_legacy = false;
-
-        /*
-         * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
-         * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
-         * overflow.
-         */
-        if ( !p->policy.basic.lppp )
-            p->policy.basic.lppp = 2;
-        else if ( !(p->policy.basic.lppp & 0x80) )
-            p->policy.basic.lppp *= 2;
-
-        switch ( p->policy.x86_vendor )
+        /* TODO: Expose the ability to choose a custom topology for HVM/PVH */
+        unsigned int threads_per_core = 1;
+        unsigned int cores_per_pkg = di.max_vcpu_id + 1;
+        rc = x86_topo_from_parts(&p->policy, threads_per_core, cores_per_pkg);
+        if ( rc )
         {
-        case X86_VENDOR_INTEL:
-            for ( i = 0; (p->policy.cache.subleaf[i].type &&
-                          i < ARRAY_SIZE(p->policy.cache.raw)); ++i )
-            {
-                p->policy.cache.subleaf[i].cores_per_package =
-                    (p->policy.cache.subleaf[i].cores_per_package << 1) | 1;
-                p->policy.cache.subleaf[i].threads_per_cache = 0;
-            }
-            break;
-
-        case X86_VENDOR_AMD:
-        case X86_VENDOR_HYGON:
-            /*
-             * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
-             * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
-             * Update to reflect vLAPIC_ID = vCPU_ID * 2. But avoid
-             * - overflow,
-             * - going out of sync with leaf 1 EBX[23:16],
-             * - incrementing ApicIdCoreSize when it's zero (which changes the
-             *   meaning of bits 7:0).
-             *
-             * UPDATE: I addition to avoiding overflow, some
-             * proprietary operating systems have trouble with
-             * apic_id_size values greater than 7. Limit the value to
-             * 7 for now.
-             */
-            if ( p->policy.extd.nc < 0x7f )
-            {
-                if ( p->policy.extd.apic_id_size != 0 && p->policy.extd.apic_id_size < 0x7 )
-                    p->policy.extd.apic_id_size++;
-
-                p->policy.extd.nc = (p->policy.extd.nc << 1) | 1;
-            }
-            break;
+            ERROR("Failed to generate topology: t/c=%u c/p=%u",
+                  threads_per_core, cores_per_pkg);
+            goto out;
         }
     }
 
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 4b6d96276399..0ad871732ba0 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -278,9 +278,6 @@ static void recalculate_misc(struct cpu_policy *p)
 
     p->basic.raw[0x8] = EMPTY_LEAF;
 
-    /* TODO: Rework topology logic. */
-    memset(p->topo.raw, 0, sizeof(p->topo.raw));
-
     p->basic.raw[0xc] = EMPTY_LEAF;
 
     p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
@@ -621,6 +618,9 @@ static void __init calculate_pv_max_policy(void)
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
+
+    /* Wipe host topology. Toolstack is expected to synthesise a sensible one */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -773,6 +773,9 @@ static void __init calculate_hvm_max_policy(void)
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     p->platform_info.cpuid_faulting = true;
+
+    /* Wipe host topology. Toolstack is expected to synthesise a sensible one */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
 }
 
 static void __init calculate_hvm_def_policy(void)