Message ID | 20230629152656.12655-5-alejandro.vallejo@cloud.com (mailing list archive) |
---|---|
State | Superseded |
Series | Prevent attempting updates known to fail |
On 29.06.2023 17:26, Alejandro Vallejo wrote:
> --- a/xen/arch/x86/cpu/microcode/core.c
> +++ b/xen/arch/x86/cpu/microcode/core.c
> @@ -847,17 +847,21 @@ int __init early_microcode_init(unsigned long *module_map,
>  {
>      const struct cpuinfo_x86 *c = &boot_cpu_data;
>      int rc = 0;
> +    bool can_load = false;
>
>      switch ( c->x86_vendor )
>      {
>      case X86_VENDOR_AMD:
>          if ( c->x86 >= 0x10 )
> +        {
>              ucode_ops = amd_ucode_ops;
> +            can_load = true;
> +        }
>          break;
>
>      case X86_VENDOR_INTEL:
> -        if ( c->x86 >= 6 )
> -            ucode_ops = intel_ucode_ops;
> +        ucode_ops = intel_ucode_ops;
> +        can_load = intel_can_load_microcode();
>          break;
>      }
>
> @@ -874,7 +878,7 @@ int __init early_microcode_init(unsigned long *module_map,
>       * mean that they will not accept microcode updates. We take the hint
>       * and ignore the microcode interface in that case.
>       */
> -    if ( this_cpu(cpu_sig).rev == ~0 )
> +    if ( this_cpu(cpu_sig).rev == ~0 || !can_load )

While not too bad, the addition brings code and comment slightly out of
sync.

> --- a/xen/arch/x86/cpu/microcode/intel.c
> +++ b/xen/arch/x86/cpu/microcode/intel.c
> @@ -385,6 +385,19 @@ static struct microcode_patch *cf_check cpu_request_microcode(
>      return patch;
>  }
>
> +bool __init intel_can_load_microcode(void)
> +{
> +    uint64_t mcu_ctrl;
> +
> +    if ( !cpu_has_mcu_ctrl )
> +        return true;
> +
> +    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);

While one would hope that feature bit and MSR access working come in
matched pairs, I still wonder whether - just to be on the safe side -
the caller wouldn't better avoid calling here when rev == ~0 (and
hence we won't try to load ucode anyway). I would envision can_load's
initializer to become this_cpu(cpu_sig).rev != ~0, with other logic
adjusted as necessary in early_microcode_init().

Jan
On Wed, Jul 05, 2023 at 12:51:47PM +0200, Jan Beulich wrote:
> > --- a/xen/arch/x86/cpu/microcode/intel.c
> > +++ b/xen/arch/x86/cpu/microcode/intel.c
> > @@ -385,6 +385,19 @@ static struct microcode_patch *cf_check cpu_request_microcode(
> >      return patch;
> >  }
> >
> > +bool __init intel_can_load_microcode(void)
> > +{
> > +    uint64_t mcu_ctrl;
> > +
> > +    if ( !cpu_has_mcu_ctrl )
> > +        return true;
> > +
> > +    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);
>
> While one would hope that feature bit and MSR access working come in
> matched pairs, I still wonder whether - just to be on the safe side -
> the caller wouldn't better avoid calling here when rev == ~0 (and
> hence we won't try to load ucode anyway). I would envision can_load's
> initializer to become this_cpu(cpu_sig).rev != ~0, with other logic
> adjusted as necessary in early_microcode_init().
>
> Jan

We only know about the ucode revision after the collect_cpu_info() call,
and we can only make that call after the vendor-specific section that sets
the function pointers up (and calls intel_can_load_microcode()).

One could imagine turning can_load into a function pointer so that its
execution is deferred until after the revision check (and skipped
altogether if `rev == ~0`).

Alejandro
On 05.07.2023 16:03, Alejandro Vallejo wrote:
> On Wed, Jul 05, 2023 at 12:51:47PM +0200, Jan Beulich wrote:
>>> --- a/xen/arch/x86/cpu/microcode/intel.c
>>> +++ b/xen/arch/x86/cpu/microcode/intel.c
>>> @@ -385,6 +385,19 @@ static struct microcode_patch *cf_check cpu_request_microcode(
>>>      return patch;
>>>  }
>>>
>>> +bool __init intel_can_load_microcode(void)
>>> +{
>>> +    uint64_t mcu_ctrl;
>>> +
>>> +    if ( !cpu_has_mcu_ctrl )
>>> +        return true;
>>> +
>>> +    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);
>>
>> While one would hope that feature bit and MSR access working come in
>> matched pairs, I still wonder whether - just to be on the safe side -
>> the caller wouldn't better avoid calling here when rev == ~0 (and
>> hence we won't try to load ucode anyway). I would envision can_load's
>> initializer to become this_cpu(cpu_sig).rev != ~0, with other logic
>> adjusted as necessary in early_microcode_init().
>>
> We only know about the ucode revision after the collect_cpu_info() call,
> and we can only make that call after the vendor-specific section that sets
> the function pointers up (and calls intel_can_load_microcode()).

Hmm, right, that wasn't quite visible from looking at patch and current
tree, because of what earlier patches in the series do.

> One could imagine turning can_load into a function pointer so that its
> execution is deferred until after the revision check (and skipped
> altogether if `rev==~0`).

Perhaps not worth going this far, and instead stay with what you have
until we know (if ever) that further tweaking is necessary.

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(maybe with an adjustment to the comment, as mentioned in the earlier
reply)

Jan
On Wed, Jul 05, 2023 at 04:30:02PM +0200, Jan Beulich wrote:
> (maybe with an adjustment to the comment, as mentioned in the
> earlier reply)
>
> Jan

Yes, that sounds good to me. Thanks.

Alejandro
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 98a5aebfe3..982b278c9e 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -847,17 +847,21 @@ int __init early_microcode_init(unsigned long *module_map,
 {
     const struct cpuinfo_x86 *c = &boot_cpu_data;
     int rc = 0;
+    bool can_load = false;
 
     switch ( c->x86_vendor )
     {
     case X86_VENDOR_AMD:
         if ( c->x86 >= 0x10 )
+        {
             ucode_ops = amd_ucode_ops;
+            can_load = true;
+        }
         break;
 
     case X86_VENDOR_INTEL:
-        if ( c->x86 >= 6 )
-            ucode_ops = intel_ucode_ops;
+        ucode_ops = intel_ucode_ops;
+        can_load = intel_can_load_microcode();
         break;
     }
 
@@ -874,7 +878,7 @@ int __init early_microcode_init(unsigned long *module_map,
      * mean that they will not accept microcode updates. We take the hint
      * and ignore the microcode interface in that case.
      */
-    if ( this_cpu(cpu_sig).rev == ~0 )
+    if ( this_cpu(cpu_sig).rev == ~0 || !can_load )
     {
         printk(XENLOG_WARNING "Microcode loading disabled\n");
         ucode_ops.apply_microcode = NULL;
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index 8d4d6574aa..060c529a6e 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -385,6 +385,19 @@ static struct microcode_patch *cf_check cpu_request_microcode(
     return patch;
 }
 
+bool __init intel_can_load_microcode(void)
+{
+    uint64_t mcu_ctrl;
+
+    if ( !cpu_has_mcu_ctrl )
+        return true;
+
+    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);
+
+    /* If DIS_MCU_LOAD is set applying microcode updates won't work */
+    return !(mcu_ctrl & MCU_CONTROL_DIS_MCU_LOAD);
+}
+
 const struct microcode_ops __initconst_cf_clobber intel_ucode_ops = {
     .cpu_request_microcode = cpu_request_microcode,
     .collect_cpu_info = collect_cpu_info,
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index 626aeb4d08..d80787205a 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -60,6 +60,13 @@ struct microcode_ops {
                            const struct microcode_patch *new,
                            const struct microcode_patch *old);
 };
 
+/**
+ * Checks whether we can perform microcode updates on this Intel system
+ *
+ * @return True iff the microcode update facilities are enabled
+ */
+bool intel_can_load_microcode(void);
+
 extern const struct microcode_ops amd_ucode_ops, intel_ucode_ops;
 
 #endif /* ASM_X86_MICROCODE_PRIVATE_H */
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index e2cb8f3cc7..608bc4dce0 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -192,6 +192,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 #define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
+#define cpu_has_mcu_ctrl        boot_cpu_has(X86_FEATURE_MCU_CTRL)
 #define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
 #define cpu_has_rrsba           boot_cpu_has(X86_FEATURE_RRSBA)
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 2749e433d2..5c1350b5f9 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -165,6 +165,11 @@
 #define PASID_PASID_MASK                0x000fffff
 #define PASID_VALID                     (_AC(1, ULL) << 31)
 
+#define MSR_MCU_CONTROL                 0x00001406
+#define  MCU_CONTROL_LOCK               (_AC(1, ULL) <<  0)
+#define  MCU_CONTROL_DIS_MCU_LOAD       (_AC(1, ULL) <<  1)
+#define  MCU_CONTROL_EN_SMM_BYPASS      (_AC(1, ULL) <<  2)
+
 #define MSR_UARCH_MISC_CTRL             0x00001b01
 #define  UARCH_CTRL_DOITM               (_AC(1, ULL) <<  0)
If IA32_MSR_MCU_CONTROL exists then it's possible a CPU may be unable to
perform microcode updates. This is controlled through the DIS_MCU_LOAD bit
and is intended for baremetal clouds where the owner may not trust the
tenant to choose the microcode version in use.

If we notice that bit being set then simply disable the "apply_microcode"
handler so we can't even try to perform the update (as it's known to be
silently dropped).

While at it, remove the Intel family check, as microcode loading is
supported on every Intel 64 CPU.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v5:
  * Removed __init on declaration
  * Minor style fix (2 spaces rather than 1 after "return")
---
 xen/arch/x86/cpu/microcode/core.c     | 10 +++++++---
 xen/arch/x86/cpu/microcode/intel.c    | 13 +++++++++++++
 xen/arch/x86/cpu/microcode/private.h  |  7 +++++++
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/include/asm/msr-index.h  |  5 +++++
 5 files changed, 33 insertions(+), 3 deletions(-)