Message ID | 1564654971-31328-13-git-send-email-chao.gao@intel.com (mailing list archive)
---|---
State | Superseded
Series | improve late microcode loading
On 01.08.2019 12:22, Chao Gao wrote:
> --- a/xen/arch/x86/microcode.c
> +++ b/xen/arch/x86/microcode.c
> @@ -189,12 +189,20 @@ static DEFINE_SPINLOCK(microcode_mutex);
>
>  DEFINE_PER_CPU(struct cpu_signature, cpu_sig);
>
> -struct microcode_info {
> -    unsigned int cpu;
> -    uint32_t buffer_size;
> -    int error;
> -    char buffer[1];
> -};
> +/*
> + * Return a patch that covers current CPU. If there are multiple patches,
> + * return the one with the highest revision number. Return error If no
> + * patch is found and an error occurs during the parsing process. Otherwise
> + * return NULL.
> + */
> +static struct microcode_patch *microcode_parse_blob(const char *buf,
> +                                                    uint32_t len)

Btw - you'd have less issues with line length if you omitted the
"microcode_" prefix from static functions.

> @@ -250,49 +251,88 @@ bool microcode_update_cache(struct microcode_patch *patch)
>      return true;
>  }
>
> -static int microcode_update_cpu(const void *buf, size_t size)
> +/*
> + * Load a microcode update to current CPU.
> + *
> + * If no patch is provided, the cached patch will be loaded. Microcode update
> + * during APs bringup and CPU resuming falls into this case.
> + */
> +static int microcode_update_cpu(const struct microcode_patch *patch)
>  {
> -    int err;
> -    unsigned int cpu = smp_processor_id();
> -    struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
> +    int err = microcode_ops->collect_cpu_info(&this_cpu(cpu_sig));
> +
> +    if ( unlikely(err) )
> +        return err;
>
>      spin_lock(&microcode_mutex);
>
> -    err = microcode_ops->collect_cpu_info(sig);
> -    if ( likely(!err) )
> -        err = microcode_ops->cpu_request_microcode(buf, size);
> +    if ( patch )
> +    {
> +        /*
> +         * If a patch is specified, it should has newer revision than
> +         * that of the patch cached.
> +         */
> +        if ( microcode_cache &&
> +             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )

While I see that you've taken care of the one case in Intel specific
code, this is again a case where I don't think you can validly call
this hook in the Intel case. Albeit maybe it is okay, provided that
the caller has already verified it against the CPU signature. Then
again I wonder why this check gets done here rather than in the
caller (next to that other check) anyway. Afaics this way you'd
also avoid re-checking on every CPU a CPU-independent property.

> -static long do_microcode_update(void *_info)
> +static long do_microcode_update(void *patch)
>  {
> -    struct microcode_info *info = _info;
> -    int error;
> -
> -    BUG_ON(info->cpu != smp_processor_id());
> +    unsigned int cpu;
>
> -    error = microcode_update_cpu(info->buffer, info->buffer_size);
> -    if ( error )
> -        info->error = error;
> +    /* store the patch after a successful loading */

Nit: Comments should start with an uppercase letter.

Jan
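Jan's suggestion here — validating the CPU-independent "newer than the cache" property once in the caller instead of re-checking it under the lock on every CPU — could be sketched roughly as below. This is a standalone illustration with simplified stand-in types, not the actual Xen code; `validate_against_cache()` is a hypothetical helper name:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for the Xen types under discussion. */
enum microcode_match_result { OLD_UCODE, NEW_UCODE, MIS_UCODE };

struct microcode_patch { uint32_t rev; };

/*
 * A simplified ->compare_patch: a pure revision comparison that
 * depends only on the two patches, not on any per-CPU state.
 */
static enum microcode_match_result compare_patch(
    const struct microcode_patch *new_patch,
    const struct microcode_patch *cached)
{
    return new_patch->rev > cached->rev ? NEW_UCODE : OLD_UCODE;
}

/*
 * Caller-side validation: done once before fanning the update out to
 * all CPUs, rather than repeated on each CPU. Returns 0 if the patch
 * may be loaded, -1 (-EINVAL in Xen) otherwise.
 */
int validate_against_cache(const struct microcode_patch *patch,
                           const struct microcode_patch *cache)
{
    if ( cache && compare_patch(patch, cache) != NEW_UCODE )
        return -1;
    return 0;
}
```

Since the comparison depends only on the two patch blobs, running it once per load operation yields the same answer as running it on every CPU.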
On Fri, Aug 02, 2019 at 03:40:55PM +0000, Jan Beulich wrote:
>On 01.08.2019 12:22, Chao Gao wrote:
>> [...]
>> +    if ( patch )
>> +    {
>> +        /*
>> +         * If a patch is specified, it should has newer revision than
>> +         * that of the patch cached.
>> +         */
>> +        if ( microcode_cache &&
>> +             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )
>
>While I see that you've taken care of the one case in Intel specific
>code, this is again a case where I don't think you can validly call
>this hook in the Intel case. Albeit maybe it is okay, provided that
>the caller has already verified it against the CPU signature. Then
>again I wonder why this check gets done here rather than in the
>caller (next to that other check) anyway. Afaics this way you'd
>also avoid re-checking on every CPU a CPU-independent property.

As said in an earlier reply to patch 6, ->compare_patch can
be simplified a lot. Do you think it is fine to call it here
with that change?

About avoiding re-checking, we should check it with "microcode_mutex"
held, otherwise we cannot ensure nobody is updating the cache. If we
want to avoid re-checking, then this lock would be held for a long
time, from loading on the first core to loading on the last core.
Also, for both early loading and late loading, we pass 'NULL' to this
function on the following CPUs after the first successful loading. I
am afraid that there is no redundant checking.

Thanks
Chao
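Chao's locking concern — the cache comparison and any use of its result have to sit in the same critical section as cache updates — can be illustrated with a toy model. This sketch uses a pthread mutex in place of Xen's `microcode_mutex` spinlock, with hypothetical `try_load()`/`struct patch` names; it is not the Xen code itself:

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical miniature of the cache protocol being discussed. */
struct patch { uint32_t rev; };

static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct patch *cache;

/*
 * The revision check and the decision based on it must be made inside
 * the same critical section that protects cache updates; checking
 * outside the lock would race with a concurrent cache update, which is
 * why hoisting the check out of the per-CPU path also moves the
 * locking question to the caller.
 */
int try_load(struct patch *p)
{
    int ret = 0;

    pthread_mutex_lock(&cache_mutex);
    if ( cache && p->rev <= cache->rev )
        ret = -1;          /* Stale patch: rejected under the lock. */
    else
        cache = p;         /* Cache updated under the same lock. */
    pthread_mutex_unlock(&cache_mutex);

    return ret;
}
```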
On 05.08.2019 09:36, Chao Gao wrote:
> On Fri, Aug 02, 2019 at 03:40:55PM +0000, Jan Beulich wrote:
>> On 01.08.2019 12:22, Chao Gao wrote:
>>> [...]
>>> +    if ( patch )
>>> +    {
>>> +        /*
>>> +         * If a patch is specified, it should has newer revision than
>>> +         * that of the patch cached.
>>> +         */
>>> +        if ( microcode_cache &&
>>> +             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )
>>
>> While I see that you've taken care of the one case in Intel specific
>> code, this is again a case where I don't think you can validly call
>> this hook in the Intel case. Albeit maybe it is okay, provided that
>> the caller has already verified it against the CPU signature. Then
>> again I wonder why this check gets done here rather than in the
>> caller (next to that other check) anyway. Afaics this way you'd
>> also avoid re-checking on every CPU a CPU-independent property.
>
> As said in an earlier reply to patch 6, ->compare_patch can
> be simplified a lot. Do you think it is fine to call it here
> with that change?

As said there - yes, this looks to be an option.

> About avoiding re-checking, we should check it with "microcode_mutex"
> held otherwise we cannot ensure nobody is updating the cache. If we want
> to avoid re-checking, then this lock is held for a long time from loading
> on the first core to the last core. And also for early loading and late
> loading, we pass 'NULL' to this function on following CPUs after the
> first successful loading. I am afraid that there is no redundant checking.

There should not be any updating of the cache while one (system-wide)
ucode load operation is in progress, or else you risk leaving the
system in a "mixed ucode versions" state in the end. As said, also
from a logical flow-of-events perspective it seems to me that it should
be the caller to validate applicability of the patch before calling
here. But as also said, I may as well be missing some important aspect
here.

Jan
On Mon, Aug 05, 2019 at 09:38:00AM +0000, Jan Beulich wrote:
>On 05.08.2019 09:36, Chao Gao wrote:
>> On Fri, Aug 02, 2019 at 03:40:55PM +0000, Jan Beulich wrote:
>>> [...]
>>> While I see that you've taken care of the one case in Intel specific
>>> code, this is again a case where I don't think you can validly call
>>> this hook in the Intel case. [...] Afaics this way you'd
>>> also avoid re-checking on every CPU a CPU-independent property.
>>
>> As said in an earlier reply to patch 6, ->compare_patch can
>> be simplified a lot. Do you think it is fine to call it here
>> with that change?
>
>As said there - yes, this looks to be an option.
>
>> About avoiding re-checking, we should check it with "microcode_mutex"
>> held otherwise we cannot ensure nobody is updating the cache. If we want
>> to avoid re-checking, then this lock is held for a long time from loading
>> on the first core to the last core. And also for early loading and late
>> loading, we pass 'NULL' to this function on following CPUs after the
>> first successful loading. I am afraid that there is no redundant checking.
>
>There should not be any updating of the cache while one (system-wide)
>ucode load operation is in progress, or else you risk leaving the
>system in a "mixed ucode versions" state in the end. As said, also

Yes. I agree.

>from a logical flow-of-events perspective it seems to me that it should
>be the caller to validate applicability of the patch before calling
>here.

Maybe we can just remove this check. The check is to avoid loading an
older update than the cached one (an error is returned if that is the
case). But actually, ->apply_microcode() will ensure the provided
update's revision is newer than the current update revision by calling
match_cpu(). So probably this check is redundant.

Thanks
Chao
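Chao's closing point — that a separate "newer than the cache" pre-check adds nothing because the apply path itself refuses non-newer revisions — can be sketched as follows. The types and the in-memory "current revision" are simplified stand-ins, not the real Xen definitions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-ins; not the actual Xen definitions. */
struct cpu_sig { uint32_t rev; };
struct patch   { uint32_t rev; };

static struct cpu_sig cur = { 0x20 };

/* A match_cpu()-style gate: only a strictly newer revision is accepted. */
static bool match_cpu(const struct patch *p)
{
    return p->rev > cur.rev;
}

/*
 * apply_microcode() sketch: because this gate runs on every load
 * attempt anyway, an additional "newer than the cached patch" check
 * before calling it provides no extra safety — the redundancy being
 * pointed out in the thread.
 */
int apply_microcode(const struct patch *p)
{
    if ( !match_cpu(p) )
        return -1;         /* -EINVAL in the real code */
    cur.rev = p->rev;      /* Model the WRMSR-based update. */
    return 0;
}
```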
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index 03bc0aa..f0b1e39 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -189,12 +189,20 @@ static DEFINE_SPINLOCK(microcode_mutex);
 
 DEFINE_PER_CPU(struct cpu_signature, cpu_sig);
 
-struct microcode_info {
-    unsigned int cpu;
-    uint32_t buffer_size;
-    int error;
-    char buffer[1];
-};
+/*
+ * Return a patch that covers current CPU. If there are multiple patches,
+ * return the one with the highest revision number. Return error If no
+ * patch is found and an error occurs during the parsing process. Otherwise
+ * return NULL.
+ */
+static struct microcode_patch *microcode_parse_blob(const char *buf,
+                                                    uint32_t len)
+{
+    if ( likely(!microcode_ops->collect_cpu_info(&this_cpu(cpu_sig))) )
+        return microcode_ops->cpu_request_microcode(buf, len);
+
+    return NULL;
+}
 
 int microcode_resume_cpu(void)
 {
@@ -220,13 +228,6 @@ void microcode_free_patch(struct microcode_patch *microcode_patch)
     xfree(microcode_patch);
 }
 
-const struct microcode_patch *microcode_get_cache(void)
-{
-    ASSERT(spin_is_locked(&microcode_mutex));
-
-    return microcode_cache;
-}
-
 /* Return true if cache gets updated. Otherwise, return false */
 bool microcode_update_cache(struct microcode_patch *patch)
 {
@@ -250,49 +251,88 @@ bool microcode_update_cache(struct microcode_patch *patch)
     return true;
 }
 
-static int microcode_update_cpu(const void *buf, size_t size)
+/*
+ * Load a microcode update to current CPU.
+ *
+ * If no patch is provided, the cached patch will be loaded. Microcode update
+ * during APs bringup and CPU resuming falls into this case.
+ */
+static int microcode_update_cpu(const struct microcode_patch *patch)
 {
-    int err;
-    unsigned int cpu = smp_processor_id();
-    struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
+    int err = microcode_ops->collect_cpu_info(&this_cpu(cpu_sig));
+
+    if ( unlikely(err) )
+        return err;
 
     spin_lock(&microcode_mutex);
 
-    err = microcode_ops->collect_cpu_info(sig);
-    if ( likely(!err) )
-        err = microcode_ops->cpu_request_microcode(buf, size);
+    if ( patch )
+    {
+        /*
+         * If a patch is specified, it should has newer revision than
+         * that of the patch cached.
+         */
+        if ( microcode_cache &&
+             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )
+        {
+            spin_unlock(&microcode_mutex);
+            return -EINVAL;
+        }
+    }
+    else if ( microcode_cache )
+        patch = microcode_cache;
+    else
+        /* No patch to update */
+        err = -ENOENT;
+
+    if ( patch )
+    {
+        err = microcode_ops->apply_microcode(patch);
+        /* clean up patch cache if we failed to load the cached patch */
+        if ( patch == microcode_cache && err == -EIO )
+        {
+            microcode_free_patch(microcode_cache);
+            microcode_cache = NULL;
+        }
+    }
+
     spin_unlock(&microcode_mutex);
 
     return err;
 }
 
-static long do_microcode_update(void *_info)
+static long do_microcode_update(void *patch)
 {
-    struct microcode_info *info = _info;
-    int error;
-
-    BUG_ON(info->cpu != smp_processor_id());
+    unsigned int cpu;
 
-    error = microcode_update_cpu(info->buffer, info->buffer_size);
-    if ( error )
-        info->error = error;
+    /* store the patch after a successful loading */
+    if ( !microcode_update_cpu(patch) && patch )
+    {
+        spin_lock(&microcode_mutex);
+        microcode_update_cache(patch);
+        spin_unlock(&microcode_mutex);
+        patch = NULL;
+    }
 
     if ( microcode_ops->end_update )
         microcode_ops->end_update();
 
-    info->cpu = cpumask_next(info->cpu, &cpu_online_map);
-    if ( info->cpu < nr_cpu_ids )
-        return continue_hypercall_on_cpu(info->cpu, do_microcode_update, info);
+    cpu = cpumask_next(smp_processor_id(), &cpu_online_map);
+    if ( cpu < nr_cpu_ids )
+        return continue_hypercall_on_cpu(cpu, do_microcode_update, patch);
+
+    /* Free the patch if no CPU has loaded it successfully. */
+    if ( patch )
+        microcode_free_patch(patch);
 
-    error = info->error;
-    xfree(info);
-    return error;
+    return 0;
 }
 
 int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
-    struct microcode_info *info;
+    void *buffer;
+    struct microcode_patch *patch;
 
     if ( len != (uint32_t)len )
         return -E2BIG;
@@ -300,32 +340,44 @@ int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
     if ( microcode_ops == NULL )
         return -EINVAL;
 
-    info = xmalloc_bytes(sizeof(*info) + len);
-    if ( info == NULL )
+    buffer = xmalloc_bytes(len);
+    if ( !buffer )
         return -ENOMEM;
 
-    ret = copy_from_guest(info->buffer, buf, len);
-    if ( ret != 0 )
+    if ( copy_from_guest(buffer, buf, len) )
     {
-        xfree(info);
-        return ret;
+        ret = -EFAULT;
+        goto free;
     }
 
-    info->buffer_size = len;
-    info->error = 0;
-    info->cpu = cpumask_first(&cpu_online_map);
-
     if ( microcode_ops->start_update )
     {
         ret = microcode_ops->start_update();
         if ( ret != 0 )
-        {
-            xfree(info);
-            return ret;
-        }
+            goto free;
+    }
+
+    patch = microcode_parse_blob(buffer, len);
+    if ( IS_ERR(patch) )
+    {
+        ret = PTR_ERR(patch);
+        printk(XENLOG_INFO "Parsing microcode blob error %d\n", ret);
+        goto free;
     }
 
-    return continue_hypercall_on_cpu(info->cpu, do_microcode_update, info);
+    if ( !patch )
+    {
+        printk(XENLOG_INFO "No ucode found. Update aborted!\n");
+        ret = -EINVAL;
+        goto free;
+    }
+
+    ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
+                                    do_microcode_update, patch);
+
+ free:
+    xfree(buffer);
+    return ret;
 }
 
 static int __init microcode_init(void)
@@ -372,15 +424,40 @@ int __init early_microcode_update_cpu(bool start_update)
 
     microcode_ops->collect_cpu_info(&this_cpu(cpu_sig));
 
+    if ( !start_update )
+        return microcode_update_cpu(NULL);
+
     if ( data )
     {
-        if ( start_update && microcode_ops->start_update )
+        struct microcode_patch *patch;
+
+        if ( microcode_ops->start_update )
             rc = microcode_ops->start_update();
 
         if ( rc )
             return rc;
 
-        return microcode_update_cpu(data, len);
+        patch = microcode_parse_blob(data, len);
+        if ( IS_ERR(patch) )
+        {
+            printk(XENLOG_INFO "Parsing microcode blob error %ld\n",
                   PTR_ERR(patch));
+            return PTR_ERR(patch);
+        }
+
+        if ( !patch )
+        {
+            printk(XENLOG_INFO "No ucode found. Update aborted!\n");
+            return -EINVAL;
+        }
+
+        spin_lock(&microcode_mutex);
+        rc = microcode_update_cache(patch);
+        spin_unlock(&microcode_mutex);
+
+        ASSERT(rc);
+
+        return microcode_update_cpu(NULL);
     }
     else
         return -ENOMEM;
diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
index fed044a..93d2a98 100644
--- a/xen/arch/x86/microcode_amd.c
+++ b/xen/arch/x86/microcode_amd.c
@@ -446,9 +446,11 @@ static bool_t check_final_patch_levels(unsigned int cpu)
     return 0;
 }
 
-static int cpu_request_microcode(const void *buf, size_t bufsize)
+static struct microcode_patch *cpu_request_microcode(const void *buf,
+                                                     size_t bufsize)
 {
     struct microcode_amd *mc_amd;
+    struct microcode_patch *patch = NULL;
     size_t offset = 0;
     int error = 0;
     unsigned int current_cpu_id;
@@ -553,19 +555,22 @@ static int cpu_request_microcode(const void *buf, size_t bufsize)
                 break;
         }
 
-        /* Update cache if this patch covers current CPU */
-        if ( microcode_fits(new_patch->mc_amd) != MIS_UCODE )
-            microcode_update_cache(new_patch);
-        else
-            microcode_free_patch(new_patch);
-
-        if ( match_cpu(microcode_get_cache()) )
+        /*
+         * If the new patch covers current CPU, compare patches and store the
+         * one with higher revision.
+         */
+        if ( (microcode_fits(new_patch->mc_amd) != MIS_UCODE) &&
+             (!patch || (compare_patch(new_patch, patch) == NEW_UCODE)) )
         {
-            error = apply_microcode(microcode_get_cache());
-            if ( error )
-                break;
+            struct microcode_patch *tmp = patch;
+
+            patch = new_patch;
+            new_patch = tmp;
         }
 
+        if ( new_patch )
+            microcode_free_patch(new_patch);
+
         if ( offset >= bufsize )
             break;
 
@@ -599,13 +604,10 @@ static int cpu_request_microcode(const void *buf, size_t bufsize)
     xfree(mc_amd);
 
   out:
-    /*
-     * In some cases we may return an error even if processor's microcode has
-     * been updated. For example, the first patch in a container file is loaded
-     * successfully but subsequent container file processing encounters a
-     * failure.
-     */
-    return error;
+    if ( error && !patch )
+        patch = ERR_PTR(error);
+
+    return patch;
 }
 
 static int start_update(void)
diff --git a/xen/arch/x86/microcode_intel.c b/xen/arch/x86/microcode_intel.c
index bcb48bc..8780be0 100644
--- a/xen/arch/x86/microcode_intel.c
+++ b/xen/arch/x86/microcode_intel.c
@@ -279,15 +279,9 @@ static enum microcode_match_result compare_patch(
                                  old_header->pf, old_header->rev);
 }
 
-/*
- * return 0 - no update found
- * return 1 - found update
- * return < 0 - error
- */
-static int get_matching_microcode(const void *mc)
+static struct microcode_patch *alloc_microcode_patch(
+    const struct microcode_header_intel *mc_header)
 {
-    struct cpu_signature *sig = &this_cpu(cpu_sig);
-    const struct microcode_header_intel *mc_header = mc;
     unsigned long total_size = get_totalsize(mc_header);
     void *new_mc = xmalloc_bytes(total_size);
     struct microcode_patch *new_patch = xmalloc(struct microcode_patch);
@@ -296,26 +290,12 @@ static int get_matching_microcode(const void *mc)
     {
         xfree(new_patch);
         xfree(new_mc);
-        return -ENOMEM;
+        return ERR_PTR(-ENOMEM);
     }
 
-    memcpy(new_mc, mc, total_size);
+    memcpy(new_mc, mc_header, total_size);
     new_patch->mc_intel = new_mc;
 
-    /* Make sure that this patch covers current CPU */
-    if ( microcode_update_match(&new_patch->mc_intel->hdr, sig->sig,
-                                sig->pf, sig->rev) == MIS_UCODE )
-    {
-        microcode_free_patch(new_patch);
-        return 0;
-    }
-
-    microcode_update_cache(new_patch);
-
-    pr_debug("microcode: CPU%d found a matching microcode update with"
-             " version %#x (current=%#x)\n",
-             smp_processord_id(), mc_header->rev, sig->rev);
-
-    return 1;
+    return new_patch;
 }
 
 static int apply_microcode(const struct microcode_patch *patch)
@@ -391,26 +371,46 @@ static long get_next_ucode_from_buffer(void **mc, const u8 *buf,
     return offset + total_size;
 }
 
-static int cpu_request_microcode(const void *buf, size_t size)
+static struct microcode_patch *cpu_request_microcode(const void *buf,
+                                                     size_t size)
 {
     long offset = 0;
     int error = 0;
     void *mc;
+    struct microcode_patch *patch = NULL;
+    const struct cpu_signature *sig = &this_cpu(cpu_sig);
 
     while ( (offset = get_next_ucode_from_buffer(&mc, buf, size, offset)) > 0 )
     {
+        struct microcode_patch *new_patch;
+
         error = microcode_sanity_check(mc);
         if ( error )
             break;
-        error = get_matching_microcode(mc);
-        if ( error < 0 )
+
+        new_patch = alloc_microcode_patch(mc);
+        if ( IS_ERR(new_patch) )
+        {
+            error = PTR_ERR(new_patch);
             break;
+        }
+
         /*
-         * It's possible the data file has multiple matching ucode,
-         * lets keep searching till the latest version
+         * If the new patch covers current CPU, compare patches and store the
+         * one with higher revision.
          */
-        if ( error == 1 )
-            error = 0;
+        if ( (microcode_update_match(&new_patch->mc_intel->hdr, sig->sig,
                                     sig->pf, sig->rev) != MIS_UCODE) &&
+             (!patch || (compare_patch(new_patch, patch) == NEW_UCODE)) )
+        {
+            struct microcode_patch *tmp = patch;
+
+            patch = new_patch;
+            new_patch = tmp;
+        }
+
+        if ( new_patch )
+            microcode_free_patch(new_patch);
+
         xfree(mc);
     }
 
@@ -419,10 +419,10 @@ static int cpu_request_microcode(const void *buf, size_t size)
     if ( offset < 0 )
         error = offset;
 
-    if ( !error && match_cpu(microcode_get_cache()) )
-        error = apply_microcode(microcode_get_cache());
+    if ( error && !patch )
+        patch = ERR_PTR(error);
 
-    return error;
+    return patch;
 }
 
 static const struct microcode_ops microcode_intel_ops = {
diff --git a/xen/include/asm-x86/microcode.h b/xen/include/asm-x86/microcode.h
index 8c7de9d..8e71615 100644
--- a/xen/include/asm-x86/microcode.h
+++ b/xen/include/asm-x86/microcode.h
@@ -20,7 +20,8 @@ struct microcode_patch {
 };
 
 struct microcode_ops {
-    int (*cpu_request_microcode)(const void *buf, size_t size);
+    struct microcode_patch *(*cpu_request_microcode)(const void *buf,
+                                                     size_t size);
     int (*collect_cpu_info)(struct cpu_signature *csig);
     int (*apply_microcode)(const struct microcode_patch *patch);
     int (*start_update)(void);
@@ -41,8 +42,6 @@ struct cpu_signature {
 DECLARE_PER_CPU(struct cpu_signature, cpu_sig);
 
 extern const struct microcode_ops *microcode_ops;
-const struct microcode_patch *microcode_get_cache(void);
-bool microcode_update_cache(struct microcode_patch *patch);
 void microcode_free_patch(struct microcode_patch *patch);
 
 #endif /* ASM_X86__MICROCODE_H */
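The reworked `cpu_request_microcode()` above returns a three-way result — a patch pointer, NULL ("nothing found"), or an encoded errno — via the `ERR_PTR()`/`IS_ERR()`/`PTR_ERR()` convention. For reference, a minimal standalone re-implementation of those helpers (Xen's actual definitions live in `xen/include/xen/err.h` and may differ in detail):

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Small negative errno values are encoded in the top page's worth of
 * the pointer range, so a single return value can carry "a patch",
 * "no patch" (NULL), or "an error".
 */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why the AMD and Intel parsers can fold `error` into the returned pointer with `patch = ERR_PTR(error);` only when no usable patch was produced.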
During late microcode loading, apply_microcode() is invoked in
cpu_request_microcode(). To make late microcode update more reliable,
we want to put apply_microcode() into stop_machine context. So we
split it out from cpu_request_microcode(). In general, for both early
loading on the BSP and late loading, cpu_request_microcode() is called
first to get the matching microcode update contained in the blob, and
then apply_microcode() is invoked explicitly on each CPU in common
code.

Given that all CPUs are supposed to have the same signature, parsing
the microcode blob only needs to be done once. So
cpu_request_microcode() is also moved out of microcode_update_cpu().

In some cases (e.g. a broken BIOS), the system may have multiple
revisions of microcode update. So we try to load a microcode update as
long as it covers the current CPU. If a CPU loads this patch
successfully, the patch is stored into the patch cache.

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
Changes in v8:
 - divide the original patch into three patches to improve readability
 - load an update on each cpu as long as the update covers current cpu
 - store an update after the first successful loading on a CPU
 - make sure the current CPU (especially pf value) is covered by updates

Changes in v7:
 - to handle load failure, unvalidated patches won't be cached. They
   are passed as function arguments. So if an update fails, no cleanup
   of the microcode cache is needed.
---
 xen/arch/x86/microcode.c        | 181 ++++++++++++++++++++++++++++------------
 xen/arch/x86/microcode_amd.c    |  38 +++++----
 xen/arch/x86/microcode_intel.c  |  70 ++++++++--------
 xen/include/asm-x86/microcode.h |   5 +-
 4 files changed, 186 insertions(+), 108 deletions(-)
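The control flow the commit message describes — parse the blob once, apply the same parsed patch on each CPU in turn, cache it after the first successful load, and let later CPUs fall back to the cache — can be modeled with a toy sketch. All names here (`late_load`, `update_cpu`, `struct patch`) are hypothetical stand-ins for the real Xen flow built on continue_hypercall_on_cpu():

```c
#include <stdint.h>
#include <stddef.h>

#define NR_CPUS 4

struct patch { uint32_t rev; };

static struct patch *cache;
static uint32_t cpu_rev[NR_CPUS];

/* Per-CPU load: with a NULL patch, fall back to the cache
 * (the AP bring-up / resume path). */
static int update_cpu(unsigned int cpu, const struct patch *p)
{
    if ( !p )
        p = cache;
    if ( !p || p->rev <= cpu_rev[cpu] )
        return -1;
    cpu_rev[cpu] = p->rev;
    return 0;
}

/* Models the chain of per-CPU invocations: the blob was parsed once
 * into 'parsed' before this runs. Returns how many CPUs loaded it. */
int late_load(struct patch *parsed)
{
    unsigned int cpu;
    int loaded = 0;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        if ( !update_cpu(cpu, parsed) )
        {
            if ( parsed )          /* Store the patch after the first success. */
            {
                cache = parsed;
                parsed = NULL;     /* Later CPUs load from the cache. */
            }
            loaded++;
        }

    return loaded;
}
```

This mirrors why do_microcode_update() NULLs out its patch argument once the patch has been cached, and why the patch is freed only if no CPU loaded it.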