Message ID: 20201002122235.1280-1-shiju.jose@huawei.com (mailing list archive)
Series: RAS/CEC: Extend CEC for errors count check on short time period
On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
> Open Questions based on the feedback from Boris,
> 1. ARM processor error types are cache/TLB/bus errors.
>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>    Should any of the above error types not be considered for the error
>    collection and CPU core isolation?
>
> 2. If disabling the entire CPU core is not acceptable, please suggest
>    a method to disable the L1 and L2 caches on an ARM64 core.

More open questions:

> This requirement is part of early fault prediction: taking action
> when a large number of corrected errors is reported on a CPU core
> before it causes serious faults.

And do you know of actual real-life examples where this is really the case?
Do you have any users who report a large error count on ARM CPUs, originating
from the caches, such that something like this would really help?

Because from my limited experience with x86 CPUs, the cache arrays are mostly
fine, and errors reported there are not something that happens very
frequently, so we don't even need to collect and count those.

So is this something which you need to have in order to check a box somewhere
that there is some functionality, or is there an actual real-life use case
behind it which a customer has requested?

Open question from James with my reply to it:

On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
> If the corrected-count is available somewhere, can't this policy be
> made in user-space?

You mean rasdaemon goes and offlines CPUs when certain thresholds are
reached? Sure. It would be much more flexible too.

First we answer questions and discuss, then we code.
Hi Boris,
Hi James,

>-----Original Message-----
>From: Borislav Petkov [mailto:bp@alien8.de]
>Sent: 02 October 2020 13:44
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org;
>linux-kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>james.morse@arm.com; lenb@kernel.org; Linuxarm <linuxarm@huawei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>> Open Questions based on the feedback from Boris,
>> 1. ARM processor error types are cache/TLB/bus errors.
>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>>    Should any of the above error types not be considered for the error
>>    collection and CPU core isolation?
>>
>> 2. If disabling the entire CPU core is not acceptable, please suggest
>>    a method to disable the L1 and L2 caches on an ARM64 core.
>
>More open questions:
>
>> This requirement is part of early fault prediction: taking action
>> when a large number of corrected errors is reported on a CPU core
>> before it causes serious faults.
>
>And do you know of actual real-life examples where this is really the case?
>Do you have any users who report a large error count on ARM CPUs, originating
>from the caches, such that something like this would really help?
>
>Because from my limited experience with x86 CPUs, the cache arrays are mostly
>fine, and errors reported there are not something that happens very
>frequently, so we don't even need to collect and count those.
>
>So is this something which you need to have in order to check a box somewhere
>that there is some functionality, or is there an actual real-life use case
>behind it which a customer has requested?

We have not got a real-life example for this case. However, rare errors
like this can sometimes occur frequently at scale, which would cause
more serious issues if not handled.

>
>Open question from James with my reply to it:
>
>On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>> If the corrected-count is available somewhere, can't this policy be
>> made in user-space?

The error count is present in struct cper_arm_err_info; the fields of
this structure are not reported to user-space through trace events.
Presently only the fields of struct cper_sec_proc_arm are reported to
user-space, through trace-arm-event.
Also, there can be multiple cper_arm_err_info per cper_sec_proc_arm.
Thus I think this needs reporting through a new trace event.

Also, the logical index of a CPU, which I think needs to be extracted
from the 'mpidr' field of struct cper_sec_proc_arm using the
platform-dependent kernel function get_logical_index().
Thus the CPU index also needs to be reported to user-space.

>
>You mean rasdaemon goes and offlines CPUs when certain thresholds are
>reached? Sure. It would be much more flexible too.

I think adding the CPU error collection to the kernel has the following
advantages:
1. The CPU error collection and isolation would not be active if
   rasdaemon stopped running or is not running on a machine.
2. Receiving errors and isolating a CPU core from user-space would
   probably be delayed more when a large number of errors is reported.
3. Supporting the interface for configuration parameters, error
   statistics etc. is probably easier to implement in the kernel.
4. The interface given for disabling a CPU is easy to use from the
   kernel level.

>
>First we answer questions and discuss, then we code.
>
>--
>Regards/Gruss,
>    Boris.
>
>https://people.kernel.org/tglx/notes-about-netiquette

Thanks,
Shiju
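For reference, a minimal sketch of the mpidr-to-logical-CPU lookup Shiju
refers to, assuming arm64's get_logical_index() and MPIDR_HWID_BITMASK;
the wrapper name and the masking of the CPER field are assumptions, not
code from the thread:

	/* Sketch only: map a CPER 'mpidr' value to a logical CPU number.
	 * get_logical_index() is arm64-specific and returns -EINVAL if
	 * no CPU matches the hardware id. */
	#include <linux/types.h>
	#include <asm/cputype.h>
	#include <asm/smp_plat.h>

	static int cper_mpidr_to_cpu(u64 mpidr)
	{
		return get_logical_index(mpidr & MPIDR_HWID_BITMASK);
	}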
> Because from my limited experience with x86 CPUs, the cache arrays are mostly
> fine, and errors reported there are not something that happens very
> frequently, so we don't even need to collect and count those.

On Intel x86 we leave the counting and threshold decisions about cache
health to the hardware. When a cache reaches the limit, it logs a "yellow"
status instead of "green" in the machine check bank (the error is still
marked as "corrected"). The mcelog(8) daemon may attempt to take the CPUs
that share that cache offline.

See Intel SDM volume 3B "15.4 Enhanced Cache Error Reporting"

-Tony
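For reference, a minimal user-space sketch of the green/yellow decode Tony
describes, assuming the threshold-based error status encoding in
IA32_MCi_STATUS bits 54:53 per the SDM section he cites; the function name
is illustrative:

	#include <stdint.h>

	/* Sketch only: classify a raw IA32_MCi_STATUS value. Bits 54:53
	 * are 01b = "green", 10b = "yellow" when the bank supports
	 * threshold-based corrected error reporting (assumed here, not
	 * probed). */
	static const char *cache_health(uint64_t mci_status)
	{
		switch ((mci_status >> 53) & 0x3) {
		case 1:
			return "green";
		case 2:
			return "yellow";
		default:
			return "untracked";
		}
	}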
Hi Shiju,

On 02/10/2020 16:38, Shiju Jose wrote:
>> -----Original Message-----
>> From: Borislav Petkov [mailto:bp@alien8.de]
>> Sent: 02 October 2020 13:44
>> To: Shiju Jose <shiju.jose@huawei.com>
>> Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org;
>> linux-kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>> james.morse@arm.com; lenb@kernel.org; Linuxarm <linuxarm@huawei.com>
>> Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>> short time period
>>
>> On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>>> Open Questions based on the feedback from Boris,
>>> 1. ARM processor error types are cache/TLB/bus errors.
>>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>>>    Should any of the above error types not be considered for the error
>>>    collection and CPU core isolation?

Boris' earlier example was that bus errors have very little to do with the
CPU. It may just be that this CPU is handling the IRQs for a faulty device,
and is thus receiving the errors. irqbalance could change that anytime.

I'd prefer we just stick with the caches for now.

>>> 2. If disabling the entire CPU core is not acceptable, please suggest
>>>    a method to disable the L1 and L2 caches on an ARM64 core.

This is not something Linux can do. It may not be possible to do at all.

>> More open questions:
>>
>>> This requirement is part of early fault prediction: taking action
>>> when a large number of corrected errors is reported on a CPU core
>>> before it causes serious faults.
>>
>> And do you know of actual real-life examples where this is really the case?
>> Do you have any users who report a large error count on ARM CPUs,
>> originating from the caches, such that something like this would really
>> help?
>>
>> Because from my limited experience with x86 CPUs, the cache arrays are
>> mostly fine, and errors reported there are not something that happens very
>> frequently, so we don't even need to collect and count those.
>>
>> So is this something which you need to have in order to check a box
>> somewhere that there is some functionality, or is there an actual
>> real-life use case behind it which a customer has requested?

> We have not got a real-life example for this case. However, rare errors
> like this can sometimes occur frequently at scale, which would cause
> more serious issues if not handled.

Don't you need to look across all your 'at scale' machines to know what
normal looks like? I can't see how a reasonable prediction can be made from
just one machine's behaviour since boot.

These are corrected errors, nothing has gone wrong.

>> Open question from James with my reply to it:
>>
>> On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>>> If the corrected-count is available somewhere, can't this policy be
>>> made in user-space?

> The error count is present in struct cper_arm_err_info; the fields of
> this structure are not reported to user-space through trace events.
> Presently only the fields of struct cper_sec_proc_arm are reported to
> user-space, through trace-arm-event.
> Also, there can be multiple cper_arm_err_info per cper_sec_proc_arm.
> Thus I think this needs reporting through a new trace event.

I think it would be more useful to feed this into edac like ghes.c already
does for memory errors. These would end up as corrected error counts on
devices for L3 or whatever.
This saves tying your user-space component to the arm-specific CPER record
format, or even to firmware-first, meaning it is useful to the widest number
of people.

> Also, the logical index of a CPU, which I think needs to be extracted
> from the 'mpidr' field of struct cper_sec_proc_arm using the
> platform-dependent kernel function get_logical_index().
> Thus the CPU index also needs to be reported to user-space.

I thought you were talking about caches. These structures have a 'level' for
cache errors.

Certainly you need a way of knowing which cache it is, and from that number
you should also be able to work out which CPUs it is attached to.

x86 already has a way of doing this:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/resctrl_ui.rst#n327

arm64 doesn't have anything equivalent, but my current proposal for MPAM is here:
https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git/commit/?h=mpam/snapshot/feb&id=ce3148bd39509ac8b12f5917f0f92ce014a5b22f

I was hoping the PPTT table would grow something we could use as an ID, but
I've not seen anything yet.

>> You mean rasdaemon goes and offlines CPUs when certain thresholds are
>> reached? Sure. It would be much more flexible too.

> I think adding the CPU error collection to the kernel has the following
> advantages:
> 1. The CPU error collection and isolation would not be active if
>    rasdaemon stopped running or is not running on a machine.

Having CPUs disappear when nothing has gone wrong is deeply surprising. This
is going to be very specific to a small number of people. I bet they want to
calculate the threshold cluster-wide.

Having this policy in user-space means you have the ability to do something
much more useful... e.g. move your corrected-error-intolerant workload off
the CPU that seems to be affected.

(If someone who needs to solve this problem by offlining CPUs could chime in,
that would really help)

> 2. Receiving errors and isolating a CPU core from user-space would
>    probably be delayed more when a large number of errors is reported.

These are corrected errors. Nothing has gone wrong.

> 3. Supporting the interface for configuration parameters, error
>    statistics etc. is probably easier to implement in the kernel.

I disagree! Once it's added as a kernel interface, we can't change it. It's
much better for these policy-specific algorithms and thresholds to live in
user-space. The kernel can just deal with the unchanging work of making the
counter available.

> 4. The interface given for disabling a CPU is easy to use from the
>    kernel level.

It's even easier for privileged user-space:
| echo 0 > /sys/devices/system/cpu/cpu0/online

Thanks,

James
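For reference, a minimal user-space sketch of the mechanism behind that echo,
as a rasdaemon-style policy might invoke it once its own threshold logic
(not shown, and assumed here) decides to act; the function name is
illustrative:

	#include <stdio.h>

	/* Sketch only: offline a CPU via sysfs; needs privileges. */
	static int offline_cpu(int cpu)
	{
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/online", cpu);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fputs("0", f);
		return fclose(f);
	}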
On Fri, Oct 02, 2020 at 06:33:17PM +0100, James Morse wrote:
> > I think adding the CPU error collection to the kernel has the following
> > advantages:
> > 1. The CPU error collection and isolation would not be active if
> >    rasdaemon stopped running or is not running on a machine.

Wasn't there this thing called systemd which promised that it would restart
daemons when they fail?

And even if it is not there, you can always do your own cronjob which checks
for rasdaemon's presence, restarts it if it has died, and sends a mail to the
admin to check why it had died.

Everything else I've trimmed, but James has put it a lot more eloquently than
me and I cannot agree more with what he says. Doing this in userspace is
better in every aspect you can think of.

The current CEC thing runs in the kernel because it has a completely
different purpose - to limit corrected error reports which turn into very
expensive support calls for errors which were corrected, but people simply
don't get that they were corrected. Instead, they throw their hands in the
air and go "OMG, my hardware is failing". Where those are, as James says:

> These are corrected errors. Nothing has gone wrong.
Hi James,

Thanks for the reply and the information shared.

>-----Original Message-----
>From: James Morse [mailto:james.morse@arm.com]
>Sent: 02 October 2020 18:33
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: Borislav Petkov <bp@alien8.de>; linux-edac@vger.kernel.org;
>linux-acpi@vger.kernel.org; linux-kernel@vger.kernel.org;
>tony.luck@intel.com; rjw@rjwysocki.net; lenb@kernel.org;
>Linuxarm <linuxarm@huawei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>Hi Shiju,
>
>On 02/10/2020 16:38, Shiju Jose wrote:
>>> -----Original Message-----
>>> From: Borislav Petkov [mailto:bp@alien8.de]
>>> Sent: 02 October 2020 13:44
>>> To: Shiju Jose <shiju.jose@huawei.com>
>>> Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org;
>>> linux-kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>>> james.morse@arm.com; lenb@kernel.org; Linuxarm <linuxarm@huawei.com>
>>> Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count
>>> check on short time period
>>>
>>> On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>>>> Open Questions based on the feedback from Boris,
>>>> 1. ARM processor error types are cache/TLB/bus errors.
>>>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>>>>    Should any of the above error types not be considered for the error
>>>>    collection and CPU core isolation?
>
>Boris' earlier example was that bus errors have very little to do with the
>CPU. It may just be that this CPU is handling the IRQs for a faulty device,
>and is thus receiving the errors. irqbalance could change that anytime.
>
>I'd prefer we just stick with the caches for now.
>
[...]
>
>>> Open question from James with my reply to it:
>>>
>>> On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>>>> If the corrected-count is available somewhere, can't this policy be
>>>> made in user-space?
>
>> The error count is present in struct cper_arm_err_info; the fields of
>> this structure are not reported to user-space through trace events.
>> Presently only the fields of struct cper_sec_proc_arm are reported to
>> user-space, through trace-arm-event.
>> Also, there can be multiple cper_arm_err_info per cper_sec_proc_arm.
>> Thus I think this needs reporting through a new trace event.
>
>I think it would be more useful to feed this into edac like ghes.c already
>does for memory errors. These would end up as corrected error counts on
>devices for L3 or whatever.
>
>This saves tying your user-space component to the arm-specific CPER record
>format, or even to firmware-first, meaning it is useful to the widest number
>of people.
>
>> Also, the logical index of a CPU, which I think needs to be extracted
>> from the 'mpidr' field of struct cper_sec_proc_arm using the
>> platform-dependent kernel function get_logical_index().
>> Thus the CPU index also needs to be reported to user-space.
>
>I thought you were talking about caches. These structures have a 'level' for
>cache errors.
>
>Certainly you need a way of knowing which cache it is, and from that number
>you should also be able to work out which CPUs it is attached to.
>
>x86 already has a way of doing this:
>https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/resctrl_ui.rst#n327
>
>arm64 doesn't have anything equivalent, but my current proposal for MPAM is here:
>https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git/commit/?h=mpam/snapshot/feb&id=ce3148bd39509ac8b12f5917f0f92ce014a5b22f
>
>I was hoping the PPTT table would grow something we could use as an ID, but
>I've not seen anything yet.

Please find the following pseudo code we added for the kernel side, to make
sure we correctly understood your suggestions.

1. Create edac device and edac device sysfs entries for the online CPU caches.

/drivers/edac/edac_device.c
struct edac_device_ctl_info *edac_device_add_cache(unsigned int id, u8 level, u8 type)
{
	...
	/* Check edac entry for cache already present */
	edev_cache = find_edac_device_cache(id, level, type);
	if (edev_cache)
		return edev_cache;

	edev_cache = edac_device_alloc_ctrl_info(...);
	if (!edev_cache)
		return NULL;

	rc = edac_device_add_device(edev_cache);
	if (rc)
		goto exit;

	/* store edev_cache for future use */
	...
	return edev_cache;

exit:
	...
	return NULL;
}

/drivers/base/cacheinfo.c
int cache_create_edac_entries(u64 mpidr, u8 cache_level, u8 cache_type)
{
	...
	/* Get cacheinfo for each online cpu */
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);

		if (!cpu_ci || !cpu_ci->id)
			continue;
		...
		/* Add the edac entry for the CPU cache */
		edev_cache = edac_device_add_cache(cpu_ci->id, cpu_ci->level, cpu_ci->type);
		if (!edev_cache)
			break;
		...
	}
	...
}

unsigned int cache_get_cache_id(u64 proc_id, u8 cache_level, u8 cache_type)
{
	unsigned int cache_id = 0;
	...
	/* Walk looking for matching cache node */
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);

		if (!cpu_ci || !cpu_ci->id)
			continue;

		id = CONV(proc_id);	/* need to check */
		if ((id == cpu_ci->id) && (cache_level == cpu_ci->level) &&
		    (cache_type == cpu_ci->type)) {
			cache_id = cpu_ci->id;
			break;
		}
	}
	return cache_id;
}

2. Store CPU CE count in the edac sysfs entry for the CPU cache.

drivers/edac/ghes_edac.c
void ghes_edac_report_cpu_error(int cache_id, u8 cache_level, u8 cache_type, uint32 ce_count)
{
	...
	/* Check edac entry for cache already present, if not add new entry */
	edev_cache = find_edac_device_cache(cache_id, cache_level, cache_type);
	if (!edev_cache) {
		/* Add the edac entry for the cache */
		edev_cache = edac_device_add_cache(cache_id, cache_level, cache_type);
		if (!edev_cache)
			return;
	}

	/* Store the ce_count to
	 * /sys/devices/system/edac/cpu/cpu<no>/L<N>cache/ce_count */
	edac_device_handle_ce_count(edev_cache, ce_count, ...);
}

drivers/acpi/apei/ghes.c
void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata)
{
	...
	if (sec_sev != GHES_SEV_CORRECTED)
		return;

	mpidr = cper_sec_proc_arm->mpidr;
	for (i = 0; i < cper_sec_proc_arm->err_info_num; i++) {
		if (cper_sec_proc_info->type != CPER_ARM_CACHE_ERROR)
			continue;

		ce_count = cper_arm_err_info->multiple_error + 1;
		cache_type = cper_arm_err_info->type;
		cache_level = cper_arm_err_info->error_info<24:22>;
		cache_id = cache_get_cache_id(mpidr, cache_level, cache_type);
		if (!cache_id)
			return;

		ghes_edac_report_cpu_error(cache_id, cache_level, cache_type, ce_count);
	}
	...
	return;
}

>>> You mean rasdaemon goes and offlines CPUs when certain thresholds are
>>> reached? Sure. It would be much more flexible too.

[...]

>Thanks,
>
>James

Thanks,
Shiju
Hi Shiju,

On 06/10/2020 17:13, Shiju Jose wrote:

[...]

> Please find the following pseudo code we added for the kernel side, to make
> sure we correctly understood your suggestions.
>
> 1. Create edac device and edac device sysfs entries for the online CPU caches.
>
> /drivers/edac/edac_device.c
> struct edac_device_ctl_info *edac_device_add_cache(unsigned int id, u8 level, u8 type)
> {

Eh? Ah, you are adding helpers for devices that are a cache. As far as I can
see, edac only cares about 'devices'. I don't think this is needed unless
there are multiple users, or it makes a visible difference to user-space.
Otherwise this could just go into ghes_edac.c.

How this looks to user-space probably needs discussing. We should avoid
inventing anything new. I'd expect user-space to see something like the
structure described at the top of edac_device.h... but I can't spot a driver
using this. (it's a shame it's not the other way up, to avoid duplicating
shared caches)

Some archaeology may be needed!

(if there is some existing structure, I agree it should be wrapped up in
helpers to ensure it's easiest to be the same. This may be what an
edac_device_block is...)

> }
>
> /drivers/base/cacheinfo.c
> int cache_create_edac_entries(u64 mpidr, u8 cache_level, u8 cache_type)
> {
> 	...
> 	/* Get cacheinfo for each online cpu */
> 	for_each_online_cpu(i) {
> 		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);

I agree the structure of the caches should come from cacheinfo, and you
spotted it only works for online CPUs! (This means there is an interaction
with cpuhp here)

> 		if (!cpu_ci || !cpu_ci->id)

cpu->id? 0 is a valid id, there is an attribute flag to say this is valid.
This field exists in struct cacheinfo, not struct cpu_cacheinfo.

> 			continue;
> 		...
> 		/* Add the edac entry for the CPU cache */
> 		edev_cache = edac_device_add_cache(cpu_ci->id, cpu_ci->level, cpu_ci->type);
> 		if (!edev_cache)
> 			break;

This would break all other edac users. The edac driver for the platform
should take care of creating this stuff, not the core cacheinfo code.

The edac driver for the platform may know that L2 doesn't report RAS errors,
so there is no point exposing it. For firmware-first, we can't know this
until an error shows up, so we have to create everything.

This stuff should only be created/exported when ghes_edac.c is determined to
be this platform's edac driver. This code should live in ghes_edac.c.

> 		...
> 	}
> 	...
> }
>
> unsigned int cache_get_cache_id(u64 proc_id, u8 cache_level, u8 cache_type)

See get_cpu_cacheinfo_id(int cpu, int level) in next. (something very similar
to this lived in arch/x86; bits of the MPAM tree that moved it got queued for
next)

> {
> 	unsigned int cache_id = 0;
> 	...
> 	/* Walk looking for matching cache node */
> 	for_each_online_cpu(i) {

(there is an interaction with cpuhp here)

> 		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
>
> 		if (!cpu_ci || !cpu_ci->id)
> 			continue;
>
> 		id = CONV(proc_id);	/* need to check */

No idea what is going on here.

(Deriving an ID from the CPUs that are attached to the cache is arm64
specific. This has to work for x86 too. The MPAM out-of-tree code does this
as we don't have anything else. Feedback when it was posted as RFC was that
the id values should be compacted. I was hoping we would get something like
an id from the PPTT before needing this value as resctrl ABI for MPAM)

> 		if ((id == cpu_ci->id) && (cache_level == cpu_ci->level) &&
> 		    (cache_type == cpu_ci->type)) {
> 			cache_id = cpu_ci->id;
> 			break;
> 		}
> 	}
> 	return cache_id;
> }
>
> 2. Store CPU CE count in the edac sysfs entry for the CPU cache.
>
> drivers/edac/ghes_edac.c
> void ghes_edac_report_cpu_error(int cache_id, u8 cache_level, u8 cache_type, uint32 ce_count)
> {
> 	...
> 	/* Check edac entry for cache already present, if not add new entry */

You can't create devices at runtime! The notification comes in irq context,
and edac_device_add_device() takes a mutex, and you need to allocate memory.

This could be deferred to process context - but I bet it's a nuisance for
user-space not to know what counters are available until errors show up.

> 	edev_cache = find_edac_device_cache(cache_id, cache_level, cache_type);
> 	if (!edev_cache) {
> 		/* Add the edac entry for the cache */
> 		edev_cache = edac_device_add_cache(cache_id, cache_level, cache_type);
> 		if (!edev_cache)
> 			return;
> 	}

Better would be to look up the device based on the CPER. (It already looks up
the DIMM based on the CPER)

> 	/* Store the ce_count to
> 	 * /sys/devices/system/edac/cpu/cpu<no>/L<N>cache/ce_count */
> 	edac_device_handle_ce_count(edev_cache, ce_count, ...);
> }
>
> drivers/acpi/apei/ghes.c
> void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata)
> {
> 	...
> 	if (sec_sev != GHES_SEV_CORRECTED)
> 		return;

(Xiaofei Tan is looking at supporting some others of these, so you need to
interact with that work too)

> 	mpidr = cper_sec_proc_arm->mpidr;

You want an arch-specific call to convert this to the linux CPU number, and
then use that everywhere. This makes the code more re-usable, and less
surprising to other readers. arm64's get_logical_index() looks like it does
what you want.

> 	for (i = 0; i < cper_sec_proc_arm->err_info_num; i++) {
> 		if (cper_sec_proc_info->type != CPER_ARM_CACHE_ERROR)
> 			continue;
>
> 		ce_count = cper_arm_err_info->multiple_error + 1;
> 		cache_type = cper_arm_err_info->type;
> 		cache_level = cper_arm_err_info->error_info<24:22>;
> 		cache_id = cache_get_cache_id(mpidr, cache_level, cache_type);

This needs to be architecture-agnostic; it must take the cpu number.

> 		if (!cache_id)
> 			return;

0 is a valid id.

> 		ghes_edac_report_cpu_error(cache_id, cache_level, cache_type, ce_count);
> 	}
> 	...
> 	return;
> }

Thanks,

James
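Putting James's pointers together, a minimal sketch of the flow he suggests,
assuming arm64's get_logical_index() and the get_cpu_cacheinfo_id() helper he
says is queued for next; the wrapper itself is illustrative, not code from
the thread:

	#include <linux/cacheinfo.h>
	#include <linux/errno.h>
	#include <linux/types.h>
	#include <asm/smp_plat.h>

	/* Sketch only: convert the CPER mpidr to a Linux CPU number once
	 * (arch-specific), then stay architecture-agnostic. */
	static int cper_cache_id(u64 mpidr, u8 cache_level)
	{
		int cpu = get_logical_index(mpidr);	/* arm64-specific */

		if (cpu < 0)
			return -EINVAL;
		/* Arch-agnostic; returns -1 if no cache at this level
		 * exposes an id. */
		return get_cpu_cacheinfo_id(cpu, cache_level);
	}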