Message ID: 20220406063542.183946-6-Smita.KoralahalliChannabasappa@amd.com
State: New, archived
Series: Handle corrected machine check interrupt storms
+        /* Return early on an interrupt storm */
+        if (this_cpu_read(bank_storm[bank]))
+                return;

Is your reasoning for the early return that you already have plenty of
logged errors from this bank, so it is OK to skip additional processing
of this one?

-Tony
Hi,

On 4/6/22 5:44 PM, Luck, Tony wrote:

> +        /* Return early on an interrupt storm */
> +        if (this_cpu_read(bank_storm[bank]))
> +                return;
>
> Is your reasoning for the early return that you already have plenty of
> logged errors from this bank, so it is OK to skip additional processing
> of this one?

The idea behind this was: once the interrupts are turned off by
track_cmci_storm() on a storm (which is called before this "if"
statement), logging and handling of subsequent corrected errors will be
taken care of by machine_check_poll(). Hence, there is no need to redo
this in the handler.

Let me know your thoughts on this.

> -Tony
On Fri, Apr 08, 2022 at 02:48:47AM -0500, Koralahalli Channabasappa, Smita wrote:
> Hi,
>
> On 4/6/22 5:44 PM, Luck, Tony wrote:
>
> > +        /* Return early on an interrupt storm */
> > +        if (this_cpu_read(bank_storm[bank]))
> > +                return;
> >
> > Is your reasoning for the early return that you already have plenty of
> > logged errors from this bank, so it is OK to skip additional processing
> > of this one?
>
> The idea behind this was: once the interrupts are turned off by
> track_cmci_storm() on a storm (which is called before this "if"
> statement), logging and handling of subsequent corrected errors will be
> taken care of by machine_check_poll(). Hence, there is no need to redo
> this in the handler.
>
> Let me know your thoughts on this.

Makes sense. There's a storm, so picking up this error now, or waiting
for machine_check_poll() to get it, makes little difference.

-Tony
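For context, here is a simplified sketch of the poll path the thread refers
to, assuming the track_cmci_storm()/bank_storm interfaces introduced earlier
in this series. This is not the kernel's machine_check_poll(), and
log_corrected_error() is a hypothetical stand-in for the real logging path;
the point is only that a timer-driven loop like this keeps picking up
corrected errors while the bank's threshold interrupt is masked, so the
interrupt handler can safely return early during a storm.

/*
 * Illustrative sketch only -- not machine_check_poll(). Assumes the usual
 * x86 MCE helpers (rdmsrl/wrmsrl, mca_msr_reg) and the storm-tracking
 * interfaces from earlier patches in this series.
 */
static void poll_banks_sketch(unsigned int nr_banks)
{
        unsigned int bank;
        u64 status;

        for (bank = 0; bank < nr_banks; bank++) {
                rdmsrl(mca_msr_reg(bank, MCA_STATUS), status);

                /* Keep the per-CPU, per-bank storm state up to date. */
                track_cmci_storm(bank, status);

                if (!(status & MCI_STATUS_VAL))
                        continue;

                /* Hypothetical helper standing in for the real logging path. */
                log_corrected_error(bank, status);

                /* Clear the bank so the next poll sees fresh state. */
                wrmsrl(mca_msr_reg(bank, MCA_STATUS), 0);
        }
}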
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 1940d305db1c..941b09f4dac5 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -466,6 +466,47 @@ static void threshold_restart_bank(void *_tr)
 	wrmsr(tr->b->address, lo, hi);
 }
 
+static void _reset_block(struct threshold_block *block)
+{
+	struct thresh_restart tr;
+
+	memset(&tr, 0, sizeof(tr));
+	tr.b = block;
+	threshold_restart_bank(&tr);
+}
+
+static void toggle_interrupt_reset_block(struct threshold_block *block, bool on)
+{
+	if (!block)
+		return;
+
+	block->interrupt_enable = !!on;
+	_reset_block(block);
+}
+
+void mce_amd_handle_storm(int bank, bool on)
+{
+	struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL;
+	struct threshold_bank **bp = this_cpu_read(threshold_banks);
+	unsigned long flags;
+
+	if (!bp)
+		return;
+
+	local_irq_save(flags);
+
+	first_block = bp[bank]->blocks;
+	if (!first_block)
+		goto end;
+
+	toggle_interrupt_reset_block(first_block, on);
+
+	list_for_each_entry_safe(block, tmp, &first_block->miscj, miscj)
+		toggle_interrupt_reset_block(block, on);
+end:
+	local_irq_restore(flags);
+}
+
 static void mce_threshold_block_init(struct threshold_block *b, int offset)
 {
 	struct thresh_restart tr = {
@@ -867,6 +908,7 @@ static void amd_threshold_interrupt(void)
 	struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL;
 	struct threshold_bank **bp = this_cpu_read(threshold_banks);
 	unsigned int bank, cpu = smp_processor_id();
+	u64 status;
 
 	/*
 	 * Validate that the threshold bank has been initialized already. The
@@ -880,6 +922,13 @@ static void amd_threshold_interrupt(void)
 		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
 			continue;
 
+		rdmsrl(mca_msr_reg(bank, MCA_STATUS), status);
+		track_cmci_storm(bank, status);
+
+		/* Return early on an interrupt storm */
+		if (this_cpu_read(bank_storm[bank]))
+			return;
+
 		first_block = bp[bank]->blocks;
 		if (!first_block)
 			continue;
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6caee488bf7d..c510dd17f2c5 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -2078,6 +2078,7 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 
 	case X86_VENDOR_AMD: {
 		mce_amd_feature_init(c);
+		mce_handle_storm = mce_amd_handle_storm;
 		break;
 		}
 
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 49907cadf9ad..b9e8c8155c66 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -213,7 +213,11 @@ extern bool filter_mce(struct mce *m);
 
 #ifdef CONFIG_X86_MCE_AMD
 extern bool amd_filter_mce(struct mce *m);
+void track_cmci_storm(int bank, u64 status);
+void mce_amd_handle_storm(int bank, bool on);
 #else
+static inline void track_cmci_storm(int bank, u64 status) { }
+# define mce_amd_handle_storm mce_handle_storm_default
 static inline bool amd_filter_mce(struct mce *m) { return false; }
 #endif
 
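The core.c hunk above points a generic mce_handle_storm hook at the AMD
handler. A minimal sketch of how that hook is presumably declared, assuming
it and mce_handle_storm_default are the vendor callback introduced by
earlier patches in this series (the exact form is not shown in this patch):

/* Default no-op used when a vendor does not provide a storm handler. */
static void mce_handle_storm_default(int bank, bool on) { }

/* Vendor hook; AMD sets this to mce_amd_handle_storm() in the patch above. */
void (*mce_handle_storm)(int bank, bool on) = mce_handle_storm_default;

/*
 * Per the AMD implementation above, mce_handle_storm(bank, false) masks the
 * threshold interrupt for @bank and mce_handle_storm(bank, true) restores it.
 */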
Extend the logic of handling CMCI storms to AMD threshold interrupts.

Rely on an approach similar to Intel's CMCI to mitigate storms per CPU
and per bank. But, unlike CMCI, do not set thresholds to reduce the
interrupt rate on a storm. Rather, disable the interrupt on the
corresponding CPU and bank. Re-enable the interrupts once enough
consecutive polls of the bank show no corrected errors (30, as
programmed by Intel).

Turning off the threshold interrupts is the better solution on AMD
systems, as other error severities will still be handled even while
the threshold interrupts are disabled.

Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
---
 arch/x86/kernel/cpu/mce/amd.c      | 49 ++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mce/core.c     |  1 +
 arch/x86/kernel/cpu/mce/internal.h |  4 +++
 3 files changed, 54 insertions(+)
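A conceptual sketch of the per-CPU, per-bank tracking described above. This
is not the series' track_cmci_storm(): the real storm-begin heuristic lives
in shared code from earlier patches and is more involved, so
STORM_BEGIN_POLL_THRESHOLD below is purely illustrative; only the figure of
30 consecutive clean polls before re-enabling comes from the commit message.

/*
 * Illustrative sketch, assuming the usual x86 MCE and percpu headers.
 * Shape only: declare a storm after a streak of error polls, mask the
 * threshold interrupt, unmask it after 30 consecutive clean polls.
 */
#define STORM_BEGIN_POLL_THRESHOLD	5	/* illustrative, not from the series */
#define STORM_END_POLL_THRESHOLD	30	/* consecutive error-free polls */

static DEFINE_PER_CPU(bool, bank_storm[MAX_NR_BANKS]);
static DEFINE_PER_CPU(unsigned int, bank_err_polls[MAX_NR_BANKS]);
static DEFINE_PER_CPU(unsigned int, bank_clean_polls[MAX_NR_BANKS]);

static void track_cmci_storm_sketch(int bank, u64 status)
{
	bool storm = this_cpu_read(bank_storm[bank]);

	if (status & MCI_STATUS_VAL) {
		/* An error was seen: reset the quiet streak. */
		this_cpu_write(bank_clean_polls[bank], 0);

		if (!storm &&
		    this_cpu_inc_return(bank_err_polls[bank]) >= STORM_BEGIN_POLL_THRESHOLD) {
			/* Storm begins: mask the threshold interrupt. */
			this_cpu_write(bank_storm[bank], true);
			mce_amd_handle_storm(bank, false);
		}
		return;
	}

	/* Quiet poll: reset the error streak ... */
	this_cpu_write(bank_err_polls[bank], 0);

	/* ... and, during a storm, count toward unmasking the interrupt. */
	if (storm &&
	    this_cpu_inc_return(bank_clean_polls[bank]) >= STORM_END_POLL_THRESHOLD) {
		this_cpu_write(bank_storm[bank], false);
		mce_amd_handle_storm(bank, true);
	}
}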