Message ID | 20180818065126.77912-1-okaya@kernel.org (mailing list archive)
---|---
State | New, archived
Delegated to | Bjorn Helgaas
Series | [v8,1/2] PCI: pciehp: Ignore link events when there is a fatal error pending
On Fri, Aug 17, 2018 at 11:51:09PM -0700, Sinan Kaya wrote:
> --- a/drivers/pci/hotplug/pciehp_ctrl.c
> +++ b/drivers/pci/hotplug/pciehp_ctrl.c
> @@ -222,9 +222,27 @@ void pciehp_handle_disable_request(struct slot *slot)
>  void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events)
>  {
>  	struct controller *ctrl = slot->ctrl;
> +	struct pci_dev *pdev = ctrl->pcie->port;
>  	bool link_active;
>  	u8 present;
>
> +	/* If a fatal error is pending, wait for AER or DPC to handle it. */
> +	if (pcie_fatal_error_pending(pdev)) {
> +		bool recovered;
> +
> +		recovered = pcie_wait_fatal_error_clear(pdev);
> +
> +		/* If the fatal error is gone and the link is up, return */
> +		if (recovered && pcie_wait_for_link(pdev, true)) {
> +			ctrl_info(ctrl, "Slot(%s): Ignoring Link event due to successful fatal error recovery\n",
> +				  slot_name(slot));
> +			return;
> +		}
> +
> +		ctrl_info(ctrl, "Slot(%s): Fatal error recovery failed for Link event, trying hotplug reset\n",
> +			  slot_name(slot));
> +	}
> +

This differs from v7 of the patch in that *any* fatal error, not just
a Surprise Link Down, results in pciehp waiting for the error to clear.

I'm wondering if that's safe: Theoretically, the user might quickly
swap the card in the slot during, say, a Completion Timeout Error,
and with this patch pciehp would carry on as if nothing happened.

Thanks,

Lukas
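[For context on the card-swap scenario: pciehp determines slot occupancy
from the Presence Detect State bit in the hotplug port's Slot Status
register. A minimal sketch of that check follows; the helper name is
made up for illustration and is not part of the patch.]

#include <linux/pci.h>

/*
 * Illustrative only: report whether a card is present in the slot
 * below the given hotplug port, based on the Presence Detect State
 * bit (PDS) in the Slot Status register.
 */
static bool slot_is_occupied(struct pci_dev *port)
{
	u16 slot_status;

	pcie_capability_read_word(port, PCI_EXP_SLTSTA, &slot_status);
	return !!(slot_status & PCI_EXP_SLTSTA_PDS);
}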
On 8/20/2018 5:22 AM, Lukas Wunner wrote:
>> +
> This differs from v7 of the patch in that *any* fatal error, not just
> a Surprise Link Down, results in pciehp waiting for the error to clear.
>
> I'm wondering if that's safe: Theoretically, the user might quickly
> swap the card in the slot during, say, a Completion Timeout Error,
> and with this patch pciehp would carry on as if nothing happened.

Functionally both patches are identical. The v7 was still allowing
AER/DPC to handle all fatal error events except Surprise Link Down.

Now, the second patch (v8 2/2) masks the Surprise Link Down event as
we discussed before. Therefore, there is no need to filter out
incoming errors by reading the status register and masking the
unwanted bits.

To clarify: this patch waits only for FATAL error events to be
handled by the error handling services.

Completion Timeout is a NONFATAL error event by default unless
somebody tweaks the severity bits.

Anyhow, all FATAL errors cause some sort of link down, initiated
either by software (AER) or by hardware (DPC). Therefore, the hotplug
driver will observe a link down event, and AER/DPC needs to handle
the event as usual.
On Mon, Aug 20, 2018 at 12:59:05PM -0400, Sinan Kaya wrote:
> On 8/20/2018 5:22 AM, Lukas Wunner wrote:
> > > +
> > This differs from v7 of the patch in that *any* fatal error, not just
> > a Surprise Link Down, results in pciehp waiting for the error to clear.
> >
> > I'm wondering if that's safe: Theoretically, the user might quickly
> > swap the card in the slot during, say, a Completion Timeout Error,
> > and with this patch pciehp would carry on as if nothing happened.
>
> Functionally both patches are identical. The v7 was still allowing
> AER/DPC to handle all fatal error events except Surprise Link Down.
>
> Now, the second patch (v8 2/2) masks the Surprise Link Down event as
> we discussed before. Therefore, there is no need to filter out
> incoming errors by reading the status register and masking the
> unwanted bits.

Ok, missed that.

> To clarify: this patch waits only for FATAL error events to be
> handled by the error handling services.
>
> Completion Timeout is a NONFATAL error event by default unless
> somebody tweaks the severity bits.
>
> Anyhow, all FATAL errors cause some sort of link down, initiated
> either by software (AER) or by hardware (DPC). Therefore, the hotplug
> driver will observe a link down event, and AER/DPC needs to handle
> the event as usual.

Thanks for the clarification.

Lukas
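[As background on the severity point above: whether Completion Timeout
is reported as FATAL or NONFATAL is controlled by the AER Uncorrectable
Error Severity register; a bit set there escalates the corresponding
error to fatal. A minimal sketch of such a check follows; the helper
name is made up for illustration and is not part of the patch.]

#include <linux/pci.h>

/*
 * Illustrative only: check whether Completion Timeout has been
 * escalated to fatal severity on an AER-capable device. Completion
 * Timeout is non-fatal by default, i.e. this bit is normally clear.
 */
static bool comp_timeout_is_fatal(struct pci_dev *dev)
{
	int aer = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
	u32 sever;

	if (!aer)
		return false;

	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, &sever);
	return !!(sever & PCI_ERR_UNC_COMP_TIME);
}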
diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
index da7c72372ffc..22354b6850c3 100644
--- a/drivers/pci/hotplug/pciehp_ctrl.c
+++ b/drivers/pci/hotplug/pciehp_ctrl.c
@@ -222,9 +222,27 @@ void pciehp_handle_disable_request(struct slot *slot)
 void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events)
 {
 	struct controller *ctrl = slot->ctrl;
+	struct pci_dev *pdev = ctrl->pcie->port;
 	bool link_active;
 	u8 present;
 
+	/* If a fatal error is pending, wait for AER or DPC to handle it. */
+	if (pcie_fatal_error_pending(pdev)) {
+		bool recovered;
+
+		recovered = pcie_wait_fatal_error_clear(pdev);
+
+		/* If the fatal error is gone and the link is up, return */
+		if (recovered && pcie_wait_for_link(pdev, true)) {
+			ctrl_info(ctrl, "Slot(%s): Ignoring Link event due to successful fatal error recovery\n",
+				  slot_name(slot));
+			return;
+		}
+
+		ctrl_info(ctrl, "Slot(%s): Fatal error recovery failed for Link event, trying hotplug reset\n",
+			  slot_name(slot));
+	}
+
 	/*
 	 * If the slot is on and presence or link has changed, turn it off.
 	 * Even if it's occupied again, we cannot assume the card is the same.
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index c358e7a07f3f..e2d98654630b 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -356,6 +356,8 @@ void pci_enable_acs(struct pci_dev *dev);
 /* PCI error reporting and recovery */
 void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service);
 void pcie_do_nonfatal_recovery(struct pci_dev *dev);
+bool pcie_fatal_error_pending(struct pci_dev *pdev);
+bool pcie_wait_fatal_error_clear(struct pci_dev *pdev);
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 
 #ifdef CONFIG_PCIEASPM
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index f7ce0cb0b0b7..b1b5604cb00b 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -16,6 +16,7 @@
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/aer.h>
+#include <linux/delay.h>
 #include "portdrv.h"
 #include "../pci.h"
 
@@ -386,3 +387,36 @@ void pcie_do_nonfatal_recovery(struct pci_dev *dev)
 	/* TODO: Should kernel panic here? */
 	pci_info(dev, "AER: Device recovery failed\n");
 }
+
+bool pcie_fatal_error_pending(struct pci_dev *pdev)
+{
+	u16 err_status = 0;
+	int rc;
+
+	if (!pci_is_pcie(pdev))
+		return false;
+
+	rc = pcie_capability_read_word(pdev, PCI_EXP_DEVSTA, &err_status);
+	if (rc)
+		return false;
+
+	return !!(err_status & PCI_EXP_DEVSTA_FED);
+}
+
+bool pcie_wait_fatal_error_clear(struct pci_dev *pdev)
+{
+	int timeout = 1000;
+	bool ret;
+
+	for (;;) {
+		ret = pcie_fatal_error_pending(pdev);
+		if (ret == false)
+			return true;
+		if (timeout <= 0)
+			break;
+		msleep(20);
+		timeout -= 20;
+	}
+
+	return false;
+}
AER/DPC resets are known as warm resets. HP link recovery is a cold
reset, performed via power-off and power-on commands to the PCI slot.

In the middle of a warm-reset operation (AER/DPC), we are:

1. turning off the slot power. Slot power needs to be kept on in order
   for recovery to succeed.

2. performing a cold reset, causing Fatal Error recovery to fail.

If the link goes down due to a DPC event, it should be recovered by the
DPC status trigger. Injecting a cold reset in the middle can cause a HW
lockup, as it is undefined behavior.

Similarly, if the link goes down due to an AER secondary bus reset, it
should be recovered by HW. Injecting a cold reset in the middle of a
secondary bus reset can cause a HW lockup, as it is undefined behavior.

Change the flow as follows:

1. HP ISR observes a link down interrupt.
2. HP ISR checks that there is a fatal error pending; if so, it doesn't
   touch the link.
3. HP ISR waits until link recovery happens.
4. HP ISR calls the read vendor id function.
5. If all fails, try the cold-reset approach.

If a fatal error is pending and a fatal error service such as DPC or
AER is running, it is the responsibility of the fatal error service to
recover the link.

Signed-off-by: Sinan Kaya <okaya@kernel.org>
---
 drivers/pci/hotplug/pciehp_ctrl.c | 18 ++++++++++++++++
 drivers/pci/pci.h                 |  2 ++
 drivers/pci/pcie/err.c            | 34 +++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)
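[As background on step 4 of the flow above: the "read vendor id
function" presumably refers to probing the slot device's Vendor ID via
the existing pci_bus_read_dev_vendor_id() core helper, which retries
while the device returns CRS completions. A minimal sketch follows; the
helper name and the 1 s timeout are assumptions for illustration and
are not part of the patch.]

#include <linux/pci.h>

/*
 * Illustrative only: after the fatal error service recovers the link,
 * confirm that the device in the slot responds by reading its Vendor
 * ID. "port" is the downstream port owning the slot.
 */
static bool slot_device_responds(struct pci_dev *port)
{
	u32 vendor;

	if (!port->subordinate)
		return false;

	/* Probe devfn 0 on the secondary bus below the hotplug port. */
	return pci_bus_read_dev_vendor_id(port->subordinate, 0, &vendor,
					  1000);
}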