[RFC,v2,0/9] Use ERST for persistent storage of MCE and APEI errors

Message ID 20230925074426.97856-1-xueshuai@linux.alibaba.com (mailing list archive)

Message

Shuai Xue Sept. 25, 2023, 7:44 a.m. UTC
Changes since v1:
- fix a compile warning by dereferencing the rcd pointer before memset
- fix a compile error by adding a CONFIG_X86_MCE guard
- Link: https://lore.kernel.org/all/20230916130316.65815-3-xueshuai@linux.alibaba.com/

In certain scenarios (i.e. hosts/guests with root filesystems on NFS/iSCSI
where networking software and/or hardware fails, and thus kdump fails), it
is necessary to serialize the hardware error information available for
post-mortem debugging. By saving the hardware error log into flash via ERST
before panicking, the log can be retrieved from flash after the system
boots successfully again, which is very useful in production.

On the x86 platform, the kernel has supported serializing and deserializing
MCE error records since commit 482908b49ebf ("ACPI, APEI, Use ERST for
persistent storage of MCE"). The process involves two steps:

- MCE Producer: When a hardware error is detected, an MCE is raised and its
  handler writes the MCE error record into flash via ERST before panicking
- MCE Consumer: After the system reboots, /sbin/mcelog runs; it reads
  /dev/mcelog to retrieve the error records of the previous boot via ERST
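
For reference, a minimal sketch of these two steps, using the
apei_{write,read,check,clear}_mce() interfaces in
arch/x86/kernel/cpu/mce/apei.c (the helper names are illustrative and
error handling is elided):

    #include <asm/mce.h>
    #include "internal.h"	/* apei_*_mce() declarations */

    /* Producer: called from the MCE handler before panicking. */
    static void save_mce_to_erst(struct mce *m)
    {
            apei_write_mce(m);      /* serialize the record into flash via ERST */
    }

    /* Consumer: on the next boot, drain the records of the previous
     * boot, as the /dev/mcelog read path does. */
    static void read_mce_from_erst(void)
    {
            struct mce m;
            u64 record_id;

            if (apei_check_mce() <= 0)      /* any serialized records? */
                    return;

            while (apei_read_mce(&m, &record_id) > 0) {
                    mce_log(&m);                    /* hand the record to mcelog */
                    apei_clear_mce(record_id);      /* free the ERST slot */
            }
    }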

After the /dev/mcelog character device was deprecated by commit 5de97c9f6d85
("x86/mce: Factor out and deprecate the /dev/mcelog driver"), the serialized
MCE error records of the previous boot in persistent storage are no longer
collected via APEI ERST.

This patch set includes two parts:

- PATCH 1-3: rework apei_{read,write}_mce to use the pstore data structures and
  emit the mce_record tracepoint, enabling the collection of MCE records by the
  rasdaemon tool (a sketch of the replay step follows this list).
- PATCH 4-9: use ERST for persistent storage of APEI errors, and emit
  tracepoints for CPER sections, enabling the collection of these records by
  the rasdaemon tool.
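
As an illustration of the replay step in PATCH 1-3, a rough sketch of
re-emitting the serialized records of the previous boot through the
existing mce_record tracepoint (the function name is hypothetical and
error handling is elided):

    #include <linux/init.h>
    #include <asm/mce.h>
    #include <trace/events/mce.h>
    #include "internal.h"	/* apei_*_mce() declarations */

    static int __init mce_replay_erst_records(void)
    {
            struct mce m;
            u64 record_id;

            /* emit each saved record so that rasdaemon can collect it */
            while (apei_read_mce(&m, &record_id) > 0) {
                    trace_mce_record(&m);
                    apei_clear_mce(record_id);
            }
            return 0;
    }
    late_initcall(mce_replay_erst_records);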

Shuai Xue (9):
  pstore: move pstore creator id, section type and record struct to
    common header
  ACPI: APEI: Use common ERST struct to read/write serialized MCE record
  ACPI: APEI: ERST: Emit the mce_record tracepoint
  ACPI: tables: change section_type of generic error data as guid_t
  ACPI: APEI: GHES: Use ERST to serialize APEI generic error before
    panic
  ACPI: APEI: GHES: export ghes_report_chain
  ACPI: APEI: ERST: kick ghes_report_chain notifier to report serialized
    memory errors
  ACPI: APEI: ERST: print AER to report serialized PCIe errors
  ACPI: APEI: ERST: log ARM processor error

 arch/x86/kernel/cpu/mce/apei.c | 82 +++++++++++++++-------------------
 drivers/acpi/acpi_extlog.c     |  2 +-
 drivers/acpi/apei/erst.c       | 55 ++++++++++++++---------
 drivers/acpi/apei/ghes.c       | 48 +++++++++++++++++++-
 drivers/firmware/efi/cper.c    |  2 +-
 fs/pstore/platform.c           |  3 ++
 include/acpi/actbl1.h          |  5 ++-
 include/acpi/ghes.h            |  2 +-
 include/linux/pstore.h         | 29 ++++++++++++
 9 files changed, 154 insertions(+), 74 deletions(-)

Comments

Borislav Petkov Sept. 28, 2023, 2:43 p.m. UTC | #1
On Mon, Sep 25, 2023 at 03:44:17PM +0800, Shuai Xue wrote:
> After the /dev/mcelog character device was deprecated by commit 5de97c9f6d85
> ("x86/mce: Factor out and deprecate the /dev/mcelog driver"), the serialized
> MCE error records of the previous boot in persistent storage are no longer
> collected via APEI ERST.

You lost me here. /dev/mcelog is deprecated but you can still use it and
apei_write_mce() still happens.

Looking at your patches, you're adding this to ghes so how about you sit
down first and explain your exact use case and what exactly you wanna
do?

Thx.
Shuai Xue Oct. 7, 2023, 7:15 a.m. UTC | #2
On 2023/9/28 22:43, Borislav Petkov wrote:
> On Mon, Sep 25, 2023 at 03:44:17PM +0800, Shuai Xue wrote:
>> After the /dev/mcelog character device was deprecated by commit 5de97c9f6d85
>> ("x86/mce: Factor out and deprecate the /dev/mcelog driver"), the serialized
>> MCE error records of the previous boot in persistent storage are no longer
>> collected via APEI ERST.
> 
> You lost me here. /dev/mcelog is deprecated but you can still use it and
> apei_write_mce() still happens.

Yes, you are right. apei_write_mce() still happens, so MCE records are
written to persistent storage and can be retrieved by apei_read_mce().
Previously, that task was performed by the mcelog package. However, mcelog
has been deprecated, and some distributions, such as Arch, ship kernels
that are not even compiled with the necessary configuration option
CONFIG_X86_MCELOG_LEGACY. [1]

So, IMHO, it's better to add a way to retrieve MCE records by switching to
the new-generation rasdaemon solution.

> 
> Looking at your patches, you're adding this to ghes so how about you sit
> down first and explain your exact use case and what exactly you wanna
> do?
> 
> Thx.
> 

Sorry for the poor cover letter. I hope the following response can clarify
the matter.

Q1: What is the exact problem?

Traditionally, fatal hardware errors cause Linux to print an error log to
the console, e.g. via print_mce() or __ghes_print_estatus(), and then
reboot. With Linux, the primary method for obtaining debugging information
about a serious error or fault is the kdump mechanism. Kdump captures a
wealth of kernel and machine state and writes it to a file for post-mortem
debugging.

In certain scenarios, i.e. hosts/guests with root filesystems on NFS/iSCSI
where networking software and/or hardware fails, and thus kdump fails to
collect the hardware error context, leaving us unaware of what actually
occurred. In the public cloud scenario, multiple virtual machines run on a
single physical server, and if that server experiences a failure, it can
potentially impact multiple tenants. It is crucial for us to thoroughly
analyze the root causes of each instance failure in order to:

- Provide customers with a detailed explanation of the outage to reassure them.
- Collect the characteristics of the failures, such as ECC syndrome, to enable fault prediction.
- Explore potential solutions to prevent widespread outages.

In short, it is necessary to serialize hardware error information available
for post-mortem debugging.

Q2: What exactly I want to do:

The MCE handler, do_machine_check(), saves the MCE record to persistent
storage, and it is retrieved by mcelog. Mcelog has been deprecated since
kernel 4.12, released in 2017, and the help text of the configuration
option CONFIG_X86_MCELOG_LEGACY suggests switching to the new-generation
rasdaemon solution. The GHES handler does not support serializing APEI
error records now.

To serialize hardware error information available for post-mortem
debugging:
- add support for saving APEI error records into flash via ERST before
panicking,
- add support for retrieving MCE or APEI error records from flash and
emitting the related tracepoints after the system boots successfully again,
so that rasdaemon can collect them
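
A minimal sketch of the first point, assuming the existing erst_write()
interface in drivers/acpi/apei/erst.c (the helper name is hypothetical and
error handling is elided):

    #include <linux/cper.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <acpi/apei.h>
    #include <acpi/ghes.h>

    /* Wrap the GHES error status block in a CPER record header and
     * persist it via ERST before panicking. */
    static void ghes_serialize_before_panic(struct acpi_hest_generic_status *estatus)
    {
            u32 estatus_len = cper_estatus_len(estatus);
            struct cper_record_header *rcd;

            rcd = kzalloc(sizeof(*rcd) + estatus_len, GFP_ATOMIC);
            if (!rcd)
                    return;

            memcpy(rcd->signature, CPER_SIG_RECORD, CPER_SIG_SIZE);
            rcd->revision = CPER_RECORD_REV;
            rcd->signature_end = CPER_SIG_END;
            rcd->section_count = 1;
            rcd->error_severity = CPER_SEV_FATAL;
            rcd->record_length = sizeof(*rcd) + estatus_len;
            rcd->record_id = cper_next_record_id();
            memcpy(rcd + 1, estatus, estatus_len);

            erst_write(rcd);        /* persist the record into flash via ERST */
            kfree(rcd);
    }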


Best Regards,
Shuai


[1] https://wiki.archlinux.org/title/Machine-check_exception
Shuai Xue Oct. 26, 2023, 10:21 a.m. UTC | #3

Hi, Borislav,

I would like to inquire about your satisfaction with the motivation
provided. If you have no objections, I am prepared to address Kees's
comments, update the cover letter, and proceed with sending a new version.

Thank you.

Best Regards,
Shuai
Borislav Petkov Oct. 26, 2023, 1:32 p.m. UTC | #4
On Sat, Oct 07, 2023 at 03:15:45PM +0800, Shuai Xue wrote:
> So, IMHO, it's better to add a way to retrieve MCE records by switching to
> the new-generation rasdaemon solution.

rasdaemon already collects errors and even saves them in a database of
sorts. No kernel changes needed.

> Sorry for the poor cover letter. I hope the following response can clarify
> the matter.
> 
> Q1: What is the exact problem?
> 
> Traditionally, fatal hardware errors cause Linux to print an error log to
> the console, e.g. via print_mce() or __ghes_print_estatus(), and then
> reboot. With Linux, the primary method for obtaining debugging information
> about a serious error or fault is the kdump mechanism.

Not necessarily - see above.

> In the public cloud scenario, multiple virtual machines run on a
> single physical server, and if that server experiences a failure, it can
> potentially impact multiple tenants. It is crucial for us to thoroughly
> analyze the root causes of each instance failure in order to:
> 
> - Provide customers with a detailed explanation of the outage to reassure them.
> - Collect the characteristics of the failures, such as ECC syndrome, to enable fault prediction.
> - Explore potential solutions to prevent widespread outages.

Huh, are you talking about providing customers with error information
from the *underlying* physical machine which runs the cloud VMs? That
sounds suspicious, to say the least.

AFAICT, all you can tell the VM owner is: yah, the hw had an
uncorrectable error in its memory and crashed. Is that the use case?

To be able to tell the VM owners why it crashed?

> In short, it is necessary to serialize hardware error information available
> for post-mortem debugging.
> 
> Q2: What exactly I want to do:
> 
> The MCE handler, do_machine_check(), saves the MCE record to persistent
> storage, and it is retrieved by mcelog. Mcelog has been deprecated since
> kernel 4.12, released in 2017, and the help text of the configuration
> option CONFIG_X86_MCELOG_LEGACY suggests switching to the new-generation
> rasdaemon solution. The GHES handler does not support serializing APEI
> error records now.

I think you're confusing things: MCEs do get reported to userspace
through the mce_record tracepoint and rasdaemon opens it and reads
error info from there. And then writes it out to its db. So that works
now.

GHES is something different: it is a fw glue around error reporting so
that you don't have to develop a reporting driver for every platform but
you can use a single one - only the fw glue needs to be added.

The problem with GHES is that it is notoriously buggy and currently
it loads on a single platform only on x86.

ARM are doing something in that area - you're better off talking to
James Morse about it. And he's on Cc.

> To serialize hardware error information available for post-mortem
> debugging:
> - add support for saving APEI error records into flash via ERST before
> panicking,
> - add support for retrieving MCE or APEI error records from flash and
> emitting the related tracepoints after the system boots successfully again,
> so that rasdaemon can collect them

Now that is yet another thing: you want to save error records into
firmware. First of all, you don't really need it if you do kdump as
explained above.

Then, that thing has its own troubles: it is buggy like every firmware
is and it can brick the machine.

I'm not saying it is not useful - there are some use cases for it which
are being worked on but if all you wanna do is dump MCEs to rasdaemon,
that works even now.

But then you have an ARM patch there and I'm confused because MCEs are an
x86 thing - ARM has different stuff.

So I think you need to elaborate more here.

Thx.
Shuai Xue Nov. 26, 2024, 7:04 a.m. UTC | #5
On 2023/10/26 21:32, Borislav Petkov wrote:

Hi, Borislav,

Sorry for the late reply.

> On Sat, Oct 07, 2023 at 03:15:45PM +0800, Shuai Xue wrote:
>> So, IMHO, it's better to add a way to retrieve MCE records by switching to
>> the new-generation rasdaemon solution.
> 
> rasdaemon already collects errors and even saves them in a database of
> sorts. No kernel changes needed.

I did not figure out how rasdaemon *already* collects errors.

Both rasdaemon and mcelog are designed to collect errors delivered through
the x86_mce_decoder_chain notifier chain. However, because mce_irq_work is
queued on the current CPU while in MCE context, the associated
notifier_call is not executed before a system panic occurs, preventing
error collection. As a result, neither rasdaemon nor mcelog can capture
errors at this critical time.
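
Roughly, the path looks like this (paraphrasing
arch/x86/kernel/cpu/mce/core.c, not verbatim kernel code):

    do_machine_check()
        mce_log(&m)
            mce_gen_pool_add(&m)            /* record parked in a pool */
            irq_work_queue(&mce_irq_work)   /* notification is deferred */
        if (fatal)
            mce_panic(...)  /* panic happens before the deferred work runs,
                               so the x86_mce_decoder_chain notifiers (and
                               the mce_record tracepoint) never fire */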

Upon inspection, rasdaemon fails to record any errors, as evidenced by the
output of `ras-mc-ctl --errors`, which shows no memory or PCIe AER errors,
among others.

   # run after a new reboot caused by a fatal memory error
   # ras-mc-ctl --errors
   No Memory errors.
   
   No PCIe AER errors.
   
   No Extlog errors.
   
   No devlink errors.
   
   No disk errors.
   
   No Memory failure errors.
   
   No MCE errors.

Conversely, mcelog is able to retrieve and log detailed MCE error records
post-reboot, providing valuable insights into hardware error events, even in
the case of fatal errors.

   # journalctl -u mcelog --no-pager
   -- Reboot --
   systemd[1]: Started Machine Check Exception Logging Daemon.
   mcelog[2783]: Running trigger `dimm-error-trigger' (reporter: memdb)
   mcelog[2783]: Hardware event. This is not a software error.
   mcelog[2783]: MCE 0
   mcelog[2783]: not finished?
   mcelog[2783]: CPU 0 BANK 16 TSC 2307d829a77
   mcelog[2783]: RIP !INEXACT! 10:ffffffffa9588d6b
   mcelog[2783]: MISC a0001201618f886 ADDR 1715d9880
   mcelog[2783]: TIME 1732588816 Tue Nov 26 10:40:16 2024
   mcelog[2783]: MCG status:RIPV MCIP
   mcelog[2783]: MCi status:
   mcelog[2783]: Uncorrected error
   mcelog[2783]: Error enabled
   mcelog[2783]: MCi_MISC register valid
   mcelog[2783]: MCi_ADDR register valid
   mcelog[2783]: Processor context corrupt
   mcelog[2783]: MCA: MEMORY CONTROLLER RD_CHANNEL1_ERR
   mcelog[2783]: Transaction: Memory read error
   mcelog[2783]: MemCtrl: Uncorrected read error
   mcelog[2783]: bank: 0x2 bankgroup: 0x1 row: 0x402c3 column: 0x1f0
   mcelog[2783]: rank: 0x2 subrank: 0x0
   mcelog[2783]: ecc mode: SDDC
   mcelog[2783]: STATUS be00000200a00091 MCGSTATUS 5
   mcelog[2783]: MCGCAP f000c15 APICID 0 SOCKETID 0
   mcelog[2783]: PPIN 74f8640abf43c587
   mcelog[2783]: MICROCODE 2b000571
   mcelog[2783]: CPUID Vendor Intel Family 6 Model 143 Step 4

This patchset is based on the fact that we cannot collect the MCE records
that are written to persistent storage if we switch to rasdaemon. Please
correct me if I missed anything.

> 
>> Sorry for the poor cover letter. I hope the following response can clarify
>> the matter.
>>
>> Q1: What is the exact problem?
>>
>> Traditionally, fatal hardware errors cause Linux to print an error log to
>> the console, e.g. via print_mce() or __ghes_print_estatus(), and then
>> reboot. With Linux, the primary method for obtaining debugging information
>> about a serious error or fault is the kdump mechanism.
> 
> Not necessarily - see above.
> 
>> In the public cloud scenario, multiple virtual machines run on a
>> single physical server, and if that server experiences a failure, it can
>> potentially impact multiple tenants. It is crucial for us to thoroughly
>> analyze the root causes of each instance failure in order to:
>>
>> - Provide customers with a detailed explanation of the outage to reassure them.
>> - Collect the characteristics of the failures, such as ECC syndrome, to enable fault prediction.
>> - Explore potential solutions to prevent widespread outages.
> 
> Huh, are you talking about providing customers with error information
> from the *underlying* physical machine which runs the cloud VMs? That
> sounds suspicious, to say the least.
> 
> AFAICT, all you can tell the VM owner is: yah, the hw had an
> uncorrectable error in its memory and crashed. Is that the use case?

Yes, I mean that the MCE record is important evidence for digging out the
root cause of every panic in production, to avoid suffering potentially
widespread outages, so we want to collect as many error logs as possible.

> 
> To be able to tell the VM owners why it crashed?
> 
>> In short, it is necessary to serialize hardware error information available
>> for post-mortem debugging.
>>
>> Q2: What exactly I want to do:
>>
>> The MCE handler, do_machine_check(), saves the MCE record to persistent
>> storage, and it is retrieved by mcelog. Mcelog has been deprecated since
>> kernel 4.12, released in 2017, and the help text of the configuration
>> option CONFIG_X86_MCELOG_LEGACY suggests switching to the new-generation
>> rasdaemon solution. The GHES handler does not support serializing APEI
>> error records now.
> 
> I think you're confusing things: MCEs do get reported to userspace
> through the mce_record tracepoint and rasdaemon opens it and reads
> error info from there. And then writes it out to its db. So that works
> now.

For recoverable errors, MCEs are recorded by rasdaemon via the mce_record
tracepoint. But not for fatal errors. See my experiment above.

> 
> GHES is something different: it is a fw glue around error reporting so
> that you don't have to develop a reporting driver for every platform but
> you can use a single one - only the fw glue needs to be added.
> 
> The problem with GHES is that it is notoriously buggy and currently
> it loads on a single platform only on x86.

As far as I know, GHES is widely used on the ARM platform, and it is the
primary method of delivering error records from firmware to the OS.

> 
> ARM are doing something in that area - you're better off talking to
> James Morse about it. And he's on Cc.

Thanks.

> 
>> To serialize hardware error information available for post-mortem
>> debugging:
>> - add support for saving APEI error records into flash via ERST before
>> panicking,
>> - add support for retrieving MCE or APEI error records from flash and
>> emitting the related tracepoints after the system boots successfully again,
>> so that rasdaemon can collect them
> 
> Now that is yet another thing: you want to save error records into
> firmware. First of all, you don't really need it if you do kdump as
> explained above.
> 
> Then, that thing has its own troubles: it is buggy like every firmware
> is and it can brick the machine.
> 
> I'm not saying it is not useful - there are some use cases for it which
> are being worked on but if all you wanna do is dump MCEs to rasdaemon,
> that works even now.
> 
> But then you have an ARM patch there and I'm confused because MCEs are an
> x86 thing - ARM has different stuff.
> 
> So I think you need to elaborate more here.


Yes, maybe I need to split this patchset into two parts.

> 
> Thx.
> 


Thanks for the valuable comments.

Best Regards,
Shuai