| Message ID | 20230215153846.18582-3-sergey.dyasli@citrix.com |
| --- | --- |
| State | New, archived |
| Series | x86/ucode/AMD: load ucode on every logical thread |
On 15.02.2023 16:38, Sergey Dyasli wrote:
> --- a/xen/arch/x86/cpu/microcode/core.c
> +++ b/xen/arch/x86/cpu/microcode/core.c
> @@ -398,10 +398,16 @@ static int cf_check microcode_nmi_callback(
>           (!ucode_in_nmi && cpu == primary) )
>          return 0;
>
> -    if ( cpu == primary )
> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )

Given their origin, I'm pretty certain Hygon wants treating the same here
and below.

> +        /* load ucode on every logical thread/core */
>          ret = primary_thread_work(nmi_patch);
>      else
> -        ret = secondary_nmi_work();
> +    {
> +        if ( cpu == primary )
> +            ret = primary_thread_work(nmi_patch);
> +        else
> +            ret = secondary_nmi_work();
> +    }

May I ask to get away with less code churn and less overall indentation?
Already

    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
        /* load ucode on every logical thread/core */
        ret = primary_thread_work(nmi_patch);
    else if ( cpu == primary )
        ret = primary_thread_work(nmi_patch);
    else
        ret = secondary_nmi_work();

would be shorter, but I don't see anything wrong with simply going with

    if ( cpu == primary ||
         /* Load ucode on every logical thread/core: */
         (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
        ret = primary_thread_work(nmi_patch);
    else
        ret = secondary_nmi_work();

Same again further down, I believe.

Jan
On Tue, Feb 21, 2023 at 2:03 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.02.2023 16:38, Sergey Dyasli wrote:
> > --- a/xen/arch/x86/cpu/microcode/core.c
> > +++ b/xen/arch/x86/cpu/microcode/core.c
> > @@ -398,10 +398,16 @@ static int cf_check microcode_nmi_callback(
> >           (!ucode_in_nmi && cpu == primary) )
> >          return 0;
> >
> > -    if ( cpu == primary )
> > +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>
> Given their origin, I'm pretty certain Hygon wants treating the same here
> and below.

Hygon? ucode_ops is currently initialised only for AMD and Intel.

Speaking of which, I'm thinking about adding a new function
is_cpu_primary() there. This would make the core code much cleaner.
I'll see if I can make it work.

Thanks,
Sergey
On 21.02.2023 22:26, Sergey Dyasli wrote:
> On Tue, Feb 21, 2023 at 2:03 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 15.02.2023 16:38, Sergey Dyasli wrote:
>>> --- a/xen/arch/x86/cpu/microcode/core.c
>>> +++ b/xen/arch/x86/cpu/microcode/core.c
>>> @@ -398,10 +398,16 @@ static int cf_check microcode_nmi_callback(
>>>           (!ucode_in_nmi && cpu == primary) )
>>>          return 0;
>>>
>>> -    if ( cpu == primary )
>>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>>
>> Given their origin, I'm pretty certain Hygon wants treating the same here
>> and below.
>
> Hygon? ucode_ops is currently initialised only for AMD and Intel.

Hmm, you're right, we still haven't sorted that aspect. I'm inclined to say
though that adding Hygon in your changes right away reduces the burden later
on. And it'll do no harm as long as early_microcode_init() isn't properly
dealing with Hygon.

> Speaking of which, I'm thinking about adding a new function
> is_cpu_primary() there. This would make the core code much cleaner.
> I'll see if I can make it work.

Thanks - I was actually meaning to suggest something like that, realizing
the potential improvement only after sending the earlier reply. Even just
a static helper (without new hook) may already improve things.

Jan
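A minimal sketch of such a helper, assuming the name is_cpu_primary() from the discussion above and placement in core.c (neither of which is part of the posted series), might look like this:

```c
/*
 * Hypothetical static helper -- not part of the posted patch.  On AMD (and,
 * per the review, presumably Hygon) every logical thread counts as "primary"
 * so that each of them loads the microcode itself; elsewhere only the first
 * thread of each core does.
 */
static bool is_cpu_primary(unsigned int cpu)
{
    if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
        /* Load ucode on every logical thread/core. */
        return true;

    return cpu == cpumask_first(per_cpu(cpu_sibling_mask, cpu));
}
```

With a helper along these lines, each of the three call sites touched by the patch would need only a single condition instead of an open-coded vendor check.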
```diff
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index ba6e7b42c6..f720030761 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -398,10 +398,16 @@ static int cf_check microcode_nmi_callback(
          (!ucode_in_nmi && cpu == primary) )
         return 0;
 
-    if ( cpu == primary )
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+        /* load ucode on every logical thread/core */
         ret = primary_thread_work(nmi_patch);
     else
-        ret = secondary_nmi_work();
+    {
+        if ( cpu == primary )
+            ret = primary_thread_work(nmi_patch);
+        else
+            ret = secondary_nmi_work();
+    }
     this_cpu(loading_err) = ret;
 
     return 0;
@@ -540,7 +546,6 @@ static int control_thread_fn(const struct microcode_patch *patch)
 static int cf_check do_microcode_update(void *patch)
 {
     unsigned int cpu = smp_processor_id();
-    int ret;
 
     /*
      * The control thread set state to coordinate ucode loading. Primary
@@ -548,13 +553,18 @@ static int cf_check do_microcode_update(void *patch)
      * the completion of the ucode loading process.
      */
     if ( cpu == cpumask_first(&cpu_online_map) )
-        ret = control_thread_fn(patch);
-    else if ( cpu == cpumask_first(this_cpu(cpu_sibling_mask)) )
-        ret = primary_thread_fn(patch);
-    else
-        ret = secondary_thread_fn();
+        return control_thread_fn(patch);
 
-    return ret;
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+        /* load ucode on every logical thread/core */
+        return primary_thread_fn(patch);
+    else
+    {
+        if ( cpu == cpumask_first(this_cpu(cpu_sibling_mask)) )
+            return primary_thread_fn(patch);
+        else
+            return secondary_thread_fn();
+    }
 }
 
 struct ucode_buf {
@@ -642,8 +652,13 @@ static long cf_check microcode_update_helper(void *data)
     /* Calculate the number of online CPU core */
     nr_cores = 0;
     for_each_online_cpu(cpu)
-        if ( cpu == cpumask_first(per_cpu(cpu_sibling_mask, cpu)) )
+    {
+        if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+            /* load ucode on every logical thread/core */
+            nr_cores++;
+        else if ( cpu == cpumask_first(per_cpu(cpu_sibling_mask, cpu)) )
             nr_cores++;
+    }
 
     printk(XENLOG_INFO "%u cores are to update their microcode\n",
            nr_cores);
```
Currently late ucode loading is performed only on the first core of CPU
siblings.  But according to the latest recommendation from AMD, late ucode
loading should happen on every logical thread/core.

To achieve that, consider every logical cpu as "primary" when running on
AMD cpus, i.e. skip cpu_sibling_mask checks.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v4:
 - new patch
---
 xen/arch/x86/cpu/microcode/core.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)
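For illustration, here is roughly how do_microcode_update() from the second hunk might look if the vendor check were folded into a hypothetical is_cpu_primary() helper of the kind raised in the review; this is a sketch of the direction discussed, not the code that was posted:

```c
/*
 * Sketch only, assuming a hypothetical is_cpu_primary() helper.  The
 * open-coded vendor checks from the hunks above collapse into one call.
 */
static int cf_check do_microcode_update(void *patch)
{
    unsigned int cpu = smp_processor_id();

    /* The first online CPU acts as the control thread. */
    if ( cpu == cpumask_first(&cpu_online_map) )
        return control_thread_fn(patch);

    /* On AMD/Hygon every logical thread is "primary" and loads the patch. */
    if ( is_cpu_primary(cpu) )
        return primary_thread_fn(patch);

    return secondary_thread_fn();
}
```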