| Message ID | 20231007072818.58951-2-xueshuai@linux.alibaba.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v9,1/2] ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events |
Hi Shuai,

On 07/10/2023 08:28, Shuai Xue wrote:
> There are two major types of uncorrected recoverable (UCR) errors:

Is UCR a well known x86 acronym? It's best to just spell this out each time,
there is enough jargon in this area already.

>
> - Action Required (AR): The error is detected and the processor already
>   consumes the memory. OS requires to take action (for example, offline
>   failure page/kill failure thread) to recover this uncorrectable error.
>
> - Action Optional (AO): The error is detected out of processor execution
>   context. Some data in the memory are corrupted. But the data have not
>   been consumed. OS is optional to take action to recover this
>   uncorrectable error.

As elsewhere, please don't think of errors as 'action required'; this is how
things get reported to user-space. Action-required for one thread may be
action-optional for another that has the same page mapped - it's really not a
property of the error.

It would be better to describe this as synchronous and asynchronous, or
in-band and out-of-band.

> The essential difference between AR and AO errors is that AR is a
> synchronous event, while AO is an asynchronous event. The hardware will
> signal a synchronous exception (Machine Check Exception on X86 and
> Synchronous External Abort on Arm64) when an error is detected and the
> memory access has been architecturally executed.

> When APEI firmware first is enabled, a platform may describe one error
> source for the handling of synchronous errors (e.g. MCE or SEA notification),
> or for handling asynchronous errors (e.g. SCI or External Interrupt
> notification). In other words, we can distinguish synchronous errors by
> APEI notification. For AR errors, kernel will kill current process
> accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. In
> addition, for AO errors, kernel will notify the process who owns the
> poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode.
> However, the GHES driver always sets mf_flags to 0 so that all UCR errors
> are handled as AO errors in memory failure.

To make this easier to read:
UCR and AR -> synchronous
AO -> asynchronous

> To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous
> events.

> Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")

Erm, this predates arm64 support, and what you have here doesn't change the
behaviour on x86.

You can blame 7f17b4a121d0d50 ("ACPI: APEI: Kick the memory_failure() queue for
synchronous errors"), which should have covered this.

> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
> index ef59d6ea16da..88178aa6222d 100644
> --- a/drivers/acpi/apei/ghes.c
> +++ b/drivers/acpi/apei/ghes.c
> @@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
>  	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
>  }
>
> +/*
> + * A platform may describe one error source for the handling of synchronous
> + * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
> + * or External Interrupt). On x86, the HEST notifications are always
> + * asynchronous, so only SEA on ARM is delivered as a synchronous
> + * notification.
> + */
> +static inline bool is_hest_sync_notify(struct ghes *ghes)
> +{
> +	u8 notify_type = ghes->generic->notify.type;
> +
> +	return notify_type == ACPI_HEST_NOTIFY_SEA;
> +}

and as you had in earlier versions, sometimes SDEI.
SDEI can report both synchronous and asynchronous errors; I wouldn't be too
surprised if the hardware NMI can be used for the same. It would be good to
chase up having a hint of this in the CPER records and pass that in here.

Unfortunately, it's not safe to assume either way for SDEI.

Reviewed-by: James Morse <james.morse@arm.com>


Thanks,

James
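To make the SDEI caveat above concrete, here is a hypothetical sketch (not part of the posted patch) of how the helper could consume such a hint if one were ever added to the CPER records. `ACPI_HEST_NOTIFY_SEA` and `ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED` (the SDEI notification type) are existing HEST notification constants; the `sdei_is_sync` parameter is invented purely for illustration.

```c
/*
 * Hypothetical sketch only: extends the patch's is_hest_sync_notify()
 * to a world where the delivery mode of an SDEI notification is known.
 * The sdei_is_sync argument stands in for a hint that does not exist
 * in the CPER records today.
 */
static inline bool is_hest_sync_notify_hint(struct ghes *ghes, bool sdei_is_sync)
{
	u8 notify_type = ghes->generic->notify.type;

	switch (notify_type) {
	case ACPI_HEST_NOTIFY_SEA:
		/* Synchronous External Abort: always in-band. */
		return true;
	case ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED:
		/* SDEI can be either; cannot be assumed without a hint. */
		return sdei_is_sync;
	default:
		/* Polled, SCI, NMI, GSIV, External Interrupt, etc. are treated as out-of-band here. */
		return false;
	}
}
```

Until such a hint exists, the posted patch conservatively treats only SEA as synchronous.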
On 2023/12/1 01:39, James Morse wrote:
> Hi Shuai,
>
> On 07/10/2023 08:28, Shuai Xue wrote:
>> There are two major types of uncorrected recoverable (UCR) errors:
>
> Is UCR a well known x86 acronym? It's best to just spell this out each time,
> there is enough jargon in this area already.

Quite agreed, will replace it in the commit log with "uncorrected recoverable error".

>
>>
>> - Action Required (AR): The error is detected and the processor already
>>   consumes the memory. OS requires to take action (for example, offline
>>   failure page/kill failure thread) to recover this uncorrectable error.
>>
>> - Action Optional (AO): The error is detected out of processor execution
>>   context. Some data in the memory are corrupted. But the data have not
>>   been consumed. OS is optional to take action to recover this
>>   uncorrectable error.
>
> As elsewhere, please don't think of errors as 'action required'; this is how
> things get reported to user-space. Action-required for one thread may be
> action-optional for another that has the same page mapped - it's really not a
> property of the error.
> It would be better to describe this as synchronous and asynchronous, or
> in-band and out-of-band.

Thank you for the explanation. I will change it to "synchronous and asynchronous".

>
>
>> The essential difference between AR and AO errors is that AR is a
>> synchronous event, while AO is an asynchronous event. The hardware will
>> signal a synchronous exception (Machine Check Exception on X86 and
>> Synchronous External Abort on Arm64) when an error is detected and the
>> memory access has been architecturally executed.
>
>> When APEI firmware first is enabled, a platform may describe one error
>> source for the handling of synchronous errors (e.g. MCE or SEA notification),
>> or for handling asynchronous errors (e.g. SCI or External Interrupt
>> notification). In other words, we can distinguish synchronous errors by
>> APEI notification. For AR errors, kernel will kill current process
>> accessing the poisoned page by sending SIGBUS with BUS_MCEERR_AR. In
>> addition, for AO errors, kernel will notify the process who owns the
>> poisoned page by sending SIGBUS with BUS_MCEERR_AO in early kill mode.
>> However, the GHES driver always sets mf_flags to 0 so that all UCR errors
>> are handled as AO errors in memory failure.
>
> To make this easier to read:
> UCR and AR -> synchronous
> AO -> asynchronous
>

Will do that.

>
>> To this end, set memory failure flags as MF_ACTION_REQUIRED on synchronous
>> events.
>
>> Fixes: ba61ca4aab47 ("ACPI, APEI, GHES: Add hardware memory error recovery support")
>
> Erm, this predates arm64 support, and what you have here doesn't change the behaviour on x86.
>
> You can blame 7f17b4a121d0d50 ("ACPI: APEI: Kick the memory_failure() queue for
> synchronous errors"), which should have covered this.

Do you mean just drop the "Fixes" tag?

>
>> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
>> index ef59d6ea16da..88178aa6222d 100644
>> --- a/drivers/acpi/apei/ghes.c
>> +++ b/drivers/acpi/apei/ghes.c
>> @@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
>>  	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
>>  }
>>
>> +/*
>> + * A platform may describe one error source for the handling of synchronous
>> + * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
>> + * or External Interrupt). On x86, the HEST notifications are always
>> + * asynchronous, so only SEA on ARM is delivered as a synchronous
>> + * notification.
>> + */
>> +static inline bool is_hest_sync_notify(struct ghes *ghes)
>> +{
>> +	u8 notify_type = ghes->generic->notify.type;
>> +
>> +	return notify_type == ACPI_HEST_NOTIFY_SEA;
>> +}
>
> and as you had in earlier versions, sometimes SDEI.
> SDEI can report both synchronous and asynchronous errors; I wouldn't be too
> surprised if the hardware NMI can be used for the same. It would be good to
> chase up having a hint of this in the CPER records and pass that in here.
>
> Unfortunately, it's not safe to assume either way for SDEI.

For SDEI notification, only x0-x17 are preserved by the firmware. As the SDEI
TRM [1] describes, "the dispatcher can simulate an exception-like entry into
the client, **with the client providing an additional asynchronous entry point
similar to an interrupt entry point**". The client (kernel) lacks the complete
synchronous context, e.g. system registers (ELR, ESR, etc.). So I think SDEI
notification should not be used for synchronous errors; can you help to
confirm this?

For NMI notification, as far as I know, AArch64 (aka arm64 in the Linux tree)
does not provide architected NMIs.

>
> Reviewed-by: James Morse <james.morse@arm.com>
>

Thank you for the valuable comments.

Best Regards,
Shuai

[1] https://developer.arm.com/documentation/den0054/latest/
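For reference, the user-space visible difference discussed in the commit message (BUS_MCEERR_AR vs BUS_MCEERR_AO) looks roughly like the minimal sketch below. It assumes a libc that exposes the BUS_MCEERR_* si_code values via <signal.h>; the handler and message strings are illustrative only.

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Minimal sketch: distinguish the two memory-error SIGBUS si_code values. */
static void sigbus_handler(int sig, siginfo_t *si, void *ucontext)
{
	(void)sig;
	(void)ucontext;

	if (si->si_code == BUS_MCEERR_AR) {
		/* Poison consumed by this thread (synchronous): must not continue here. */
		static const char msg[] = "SIGBUS: action required memory error\n";
		write(STDERR_FILENO, msg, sizeof(msg) - 1);
		_exit(EXIT_FAILURE);
	} else if (si->si_code == BUS_MCEERR_AO) {
		/* Poison reported out of band (asynchronous): data not yet consumed. */
		static const char msg[] = "SIGBUS: action optional memory error\n";
		write(STDERR_FILENO, msg, sizeof(msg) - 1);
	}
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};

	sigaction(SIGBUS, &sa, NULL);
	/* ... application work; early-kill AO delivery is opt-in via prctl(PR_MCE_KILL) ... */
	return 0;
}
```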
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ef59d6ea16da..88178aa6222d 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+	u8 notify_type = ghes->generic->notify.type;
+
+	return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -475,7 +489,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, bool sync)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -489,7 +503,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = sync ? MF_ACTION_REQUIRED : 0;
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -497,9 +511,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	return false;
 }
 
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
+				     int sev, bool sync)
 {
 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+	int flags = sync ? MF_ACTION_REQUIRED : 0;
 	bool queued = false;
 	int sec_sev, i;
 	char *p;
@@ -524,7 +540,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
 		 * and don't filter out 'corrected' error here.
 		 */
 		if (is_cache && has_pa) {
-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
 			p += err_info->length;
 			continue;
 		}
@@ -645,6 +661,7 @@ static bool ghes_do_proc(struct ghes *ghes,
 	const guid_t *fru_id = &guid_null;
 	char *fru_text = "";
 	bool queued = false;
+	bool sync = is_hest_sync_notify(ghes);
 
 	sev = ghes_severity(estatus->error_severity);
 	apei_estatus_for_each_section(estatus, gdata) {
@@ -662,13 +679,13 @@ static bool ghes_do_proc(struct ghes *ghes,
 
 			atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
 			arch_apei_report_mem_error(sev, mem_err);
-			queued = ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev, sync);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
 			ghes_handle_aer(gdata);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			queued = ghes_handle_arm_hw_error(gdata, sev);
+			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
 		}
 		else {
 			void *err = acpi_hest_get_payload(gdata);
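For context, the flags value chosen above is passed through unchanged by the existing ghes_do_memory_failure() (whose signature appears in the hunk headers), so a synchronous notification now reaches memory_failure() with MF_ACTION_REQUIRED set. The sketch below is abridged; the address-validation details are elided.

```c
/* Abridged sketch of the existing consumer in drivers/acpi/apei/ghes.c. */
static bool ghes_do_memory_failure(u64 physical_addr, int flags)
{
	unsigned long pfn = PHYS_PFN(physical_addr);

	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
		return false;

	/* ... validation of the reported physical address elided ... */

	/* flags may now carry MF_ACTION_REQUIRED for synchronous notifications. */
	memory_failure_queue(pfn, flags);
	return true;
}
```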