
[v2,2/2] PCI/AER: Report fatal errors of RCiEP and EP if link recovered

Message ID 20241112135419.59491-3-xueshuai@linux.alibaba.com (mailing list archive)
State New
Series PCI/AER: Report fatal errors of RCiEP and EP if link recovered

Commit Message

Shuai Xue Nov. 12, 2024, 1:54 p.m. UTC
The AER driver has historically avoided reading the configuration space of
an endpoint or RCiEP that reported a fatal error, considering the link to
that device unreliable. Consequently, when a fatal error occurs, the AER
and DPC drivers do not report specific error types, resulting in logs like:

  pcieport 0000:30:03.0: EDR: EDR event received
  pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
  pcieport 0000:30:03.0: DPC: ERR_FATAL detected
  pcieport 0000:30:03.0: AER: broadcast error_detected message
  nvme nvme0: frozen state error detected, reset controller
  nvme 0000:34:00.0: ready 0ms after DPC
  pcieport 0000:30:03.0: AER: broadcast slot_reset message

AER status registers are sticky and Write-1-to-clear. If the link has recovered
after a hot reset, we can still safely access the AER status of the error device.
In that case, report the fatal errors, which helps to figure out the root cause
of the error.
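
(For illustration only, not part of this patch: a minimal sketch of what
"sticky and Write-1-to-clear" means in practice. The helper name below is
made up.)

  #include <linux/pci.h>

  /*
   * Illustrative sketch: the Uncorrectable Error Status register keeps its
   * bits across a hot reset (sticky), and writing 1s clears exactly the
   * bits that are set (Write-1-to-clear).
   */
  static void sketch_read_and_clear_uncor_status(struct pci_dev *dev)
  {
  	int aer = dev->aer_cap;
  	u32 status;

  	if (!aer)
  		return;

  	/* Still valid after the link came back, because the bits are sticky. */
  	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);

  	/* Write the value back to clear the reported bits. */
  	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
  }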

After this patch, the logs look like:

  pcieport 0000:30:03.0: EDR: EDR event received
  pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
  pcieport 0000:30:03.0: DPC: ERR_FATAL detected
  pcieport 0000:30:03.0: AER: broadcast error_detected message
  nvme nvme0: frozen state error detected, reset controller
  pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
  nvme 0000:34:00.0: ready 0ms after DPC
  nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
  nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
  nvme 0000:34:00.0:    [ 4] DLP                    (First)
  pcieport 0000:30:03.0: AER: broadcast slot_reset message

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/pci/pci.h      |  3 ++-
 drivers/pci/pcie/aer.c | 11 +++++++----
 drivers/pci/pcie/dpc.c |  2 +-
 drivers/pci/pcie/err.c |  9 +++++++++
 4 files changed, 19 insertions(+), 6 deletions(-)

Comments

Lukas Wunner Nov. 15, 2024, 9:06 a.m. UTC | #1
On Tue, Nov 12, 2024 at 09:54:19PM +0800, Shuai Xue wrote:
> The AER driver has historically avoided reading the configuration space of
> an endpoint or RCiEP that reported a fatal error, considering the link to
> that device unreliable.

It would be good if you could mention the relevant commit here:

9d938ea53b26 ("PCI/AER: Don't read upstream ports below fatal errors")

Thanks,

Lukas
Shuai Xue Nov. 15, 2024, 9:22 a.m. UTC | #2
On 2024/11/15 17:06, Lukas Wunner wrote:
> On Tue, Nov 12, 2024 at 09:54:19PM +0800, Shuai Xue wrote:
>> The AER driver has historically avoided reading the configuration space of
>> an endpoint or RCiEP that reported a fatal error, considering the link to
>> that device unreliable.
> 
> It would be good if you could mention the relevant commit here:
> 
> 9d938ea53b26 ("PCI/AER: Don't read upstream ports below fatal errors")
> 
> Thanks,
> 
> Lukas

Sure, will add it.

Thank you.
Best Regards,
Shuai
Bowman, Terry Nov. 15, 2024, 8:20 p.m. UTC | #3
Hi Shuai,


On 11/12/2024 7:54 AM, Shuai Xue wrote:
> The AER driver has historically avoided reading the configuration space of
> an endpoint or RCiEP that reported a fatal error, considering the link to
> that device unreliable. Consequently, when a fatal error occurs, the AER
> and DPC drivers do not report specific error types, resulting in logs like:
> 
>   pcieport 0000:30:03.0: EDR: EDR event received
>   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>   pcieport 0000:30:03.0: AER: broadcast error_detected message
>   nvme nvme0: frozen state error detected, reset controller
>   nvme 0000:34:00.0: ready 0ms after DPC
>   pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> AER status registers are sticky and Write-1-to-clear. If the link has recovered
> after a hot reset, we can still safely access the AER status of the error device.
> In that case, report the fatal errors, which helps to figure out the root cause
> of the error.
> 
> After this patch, the logs look like:
> 
>   pcieport 0000:30:03.0: EDR: EDR event received
>   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>   pcieport 0000:30:03.0: AER: broadcast error_detected message
>   nvme nvme0: frozen state error detected, reset controller
>   pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>   nvme 0000:34:00.0: ready 0ms after DPC
>   nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>   nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>   nvme 0000:34:00.0:    [ 4] DLP                    (First)
>   pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---
>  drivers/pci/pci.h      |  3 ++-
>  drivers/pci/pcie/aer.c | 11 +++++++----
>  drivers/pci/pcie/dpc.c |  2 +-
>  drivers/pci/pcie/err.c |  9 +++++++++
>  4 files changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 0866f79aec54..6f827c313639 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -504,7 +504,8 @@ struct aer_err_info {
>  	struct pcie_tlp_log tlp;	/* TLP Header */
>  };
>  
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy);
>  void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>  #endif	/* CONFIG_PCIEAER */
>  
> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
> index 13b8586924ea..97ec1c17b6f4 100644
> --- a/drivers/pci/pcie/aer.c
> +++ b/drivers/pci/pcie/aer.c
> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>   * aer_get_device_error_info - read error status from dev and store it to info
>   * @dev: pointer to the device expected to have a error record
>   * @info: pointer to structure to store the error record
> + * @link_healthy: link is healthy or not
>   *
>   * Return 1 on success, 0 on error.
>   *
>   * Note that @info is reused among all error devices. Clear fields properly.
>   */
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy)
>  {
>  	int type = pci_pcie_type(dev);
>  	int aer = dev->aer_cap;
> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>  	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>  		   type == PCI_EXP_TYPE_RC_EC ||
>  		   type == PCI_EXP_TYPE_DOWNSTREAM ||
> -		   info->severity == AER_NONFATAL) {
> +		   info->severity == AER_NONFATAL ||
> +		   (info->severity == AER_FATAL && link_healthy)) {
>  
>  		/* Link is still healthy for IO reads */
>  		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>  
>  	/* Report all before handle them, not to lost records by reset etc. */
>  	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
> -		if (aer_get_device_error_info(e_info->dev[i], e_info))
> +		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>  			aer_print_error(e_info->dev[i], e_info);
>  	}

Would it be reasonable to detect if the link is intact and set the aer_get_device_error_info()
function's 'link_healthy' parameter accordingly? I was thinking the upstream port's Link Status
register could be used to indicate link viability.
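
A minimal sketch of that idea (the helper below is hypothetical and not part
of the posted patch), reading the upstream port's Link Status register and
using the Data Link Layer Link Active bit:

   #include <linux/pci.h>

   /*
    * Hypothetical helper: treat the link to @dev as usable when its upstream
    * port reports Data Link Layer Link Active. Only meaningful if the port
    * supports Data Link Layer Link Active Reporting.
    */
   static bool sketch_link_healthy(struct pci_dev *dev)
   {
   	struct pci_dev *bridge = pci_upstream_bridge(dev);
   	u16 lnksta;

   	if (!bridge)
   		return false;

   	if (pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta))
   		return false;

   	if (lnksta == (u16)~0)	/* all ones: config read failed */
   		return false;

   	return !!(lnksta & PCI_EXP_LNKSTA_DLLLA);
   }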

Regards,
Terry
Shuai Xue Nov. 16, 2024, 12:44 p.m. UTC | #4
On 2024/11/16 04:20, Bowman, Terry wrote:
> Hi Shuai,
> 
> 
> On 11/12/2024 7:54 AM, Shuai Xue wrote:
>> The AER driver has historically avoided reading the configuration space of
>> an endpoint or RCiEP that reported a fatal error, considering the link to
>> that device unreliable. Consequently, when a fatal error occurs, the AER
>> and DPC drivers do not report specific error types, resulting in logs like:
>>
>>    pcieport 0000:30:03.0: EDR: EDR event received
>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>    nvme nvme0: frozen state error detected, reset controller
>>    nvme 0000:34:00.0: ready 0ms after DPC
>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>
>> AER status registers are sticky and Write-1-to-clear. If the link has recovered
>> after a hot reset, we can still safely access the AER status of the error device.
>> In that case, report the fatal errors, which helps to figure out the root cause
>> of the error.
>>
>> After this patch, the logs look like:
>>
>>    pcieport 0000:30:03.0: EDR: EDR event received
>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>    nvme nvme0: frozen state error detected, reset controller
>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>    nvme 0000:34:00.0: ready 0ms after DPC
>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>
>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>> ---
>>   drivers/pci/pci.h      |  3 ++-
>>   drivers/pci/pcie/aer.c | 11 +++++++----
>>   drivers/pci/pcie/dpc.c |  2 +-
>>   drivers/pci/pcie/err.c |  9 +++++++++
>>   4 files changed, 19 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>> index 0866f79aec54..6f827c313639 100644
>> --- a/drivers/pci/pci.h
>> +++ b/drivers/pci/pci.h
>> @@ -504,7 +504,8 @@ struct aer_err_info {
>>   	struct pcie_tlp_log tlp;	/* TLP Header */
>>   };
>>   
>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>> +			      bool link_healthy);
>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>   #endif	/* CONFIG_PCIEAER */
>>   
>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>> index 13b8586924ea..97ec1c17b6f4 100644
>> --- a/drivers/pci/pcie/aer.c
>> +++ b/drivers/pci/pcie/aer.c
>> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>>    * aer_get_device_error_info - read error status from dev and store it to info
>>    * @dev: pointer to the device expected to have a error record
>>    * @info: pointer to structure to store the error record
>> + * @link_healthy: link is healthy or not
>>    *
>>    * Return 1 on success, 0 on error.
>>    *
>>    * Note that @info is reused among all error devices. Clear fields properly.
>>    */
>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>> +			      bool link_healthy)
>>   {
>>   	int type = pci_pcie_type(dev);
>>   	int aer = dev->aer_cap;
>> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>   	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>>   		   type == PCI_EXP_TYPE_RC_EC ||
>>   		   type == PCI_EXP_TYPE_DOWNSTREAM ||
>> -		   info->severity == AER_NONFATAL) {
>> +		   info->severity == AER_NONFATAL ||
>> +		   (info->severity == AER_FATAL && link_healthy)) {
>>   
>>   		/* Link is still healthy for IO reads */
>>   		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>   
>>   	/* Report all before handle them, not to lost records by reset etc. */
>>   	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>> -		if (aer_get_device_error_info(e_info->dev[i], e_info))
>> +		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>   			aer_print_error(e_info->dev[i], e_info);
>>   	}
> 
> Would it be reasonable to detect if the link is intact and set the aer_get_device_error_info()
> function's 'link_healthy' parameter accordingly? I was thinking the upstream port's Link Status
> register could be used to indicate link viability.
> 
> Regards,
> Terry

Good idea. I think pciehp_check_link_active() is a good example of how to check
link_healthy in aer_get_device_error_info().

   int pciehp_check_link_active(struct controller *ctrl)
   {
   	struct pci_dev *pdev = ctrl_dev(ctrl);
   	u16 lnk_status;
   	int ret;
   
   	ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
   	if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
   		return -ENODEV;
   
   	ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
   	ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);
   
   	return ret;
   }

Thank you for the valuable comments.

Best Regards
Shuai
Shuai Xue Nov. 17, 2024, 1:36 p.m. UTC | #5
On 2024/11/16 20:44, Shuai Xue wrote:
> 
> 
> On 2024/11/16 04:20, Bowman, Terry wrote:
>> Hi Shuai,
>>
>>
>> On 11/12/2024 7:54 AM, Shuai Xue wrote:
>>> The AER driver has historically avoided reading the configuration space of
>>> an endpoint or RCiEP that reported a fatal error, considering the link to
>>> that device unreliable. Consequently, when a fatal error occurs, the AER
>>> and DPC drivers do not report specific error types, resulting in logs like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> AER status registers are sticky and Write-1-to-clear. If the link has recovered
>>> after a hot reset, we can still safely access the AER status of the error device.
>>> In that case, report the fatal errors, which helps to figure out the root cause
>>> of the error.
>>>
>>> After this patch, the logs look like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>>> ---
>>>   drivers/pci/pci.h      |  3 ++-
>>>   drivers/pci/pcie/aer.c | 11 +++++++----
>>>   drivers/pci/pcie/dpc.c |  2 +-
>>>   drivers/pci/pcie/err.c |  9 +++++++++
>>>   4 files changed, 19 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>>> index 0866f79aec54..6f827c313639 100644
>>> --- a/drivers/pci/pci.h
>>> +++ b/drivers/pci/pci.h
>>> @@ -504,7 +504,8 @@ struct aer_err_info {
>>>       struct pcie_tlp_log tlp;    /* TLP Header */
>>>   };
>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>> +                  bool link_healthy);
>>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>>   #endif    /* CONFIG_PCIEAER */
>>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>>> index 13b8586924ea..97ec1c17b6f4 100644
>>> --- a/drivers/pci/pcie/aer.c
>>> +++ b/drivers/pci/pcie/aer.c
>>> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>>>    * aer_get_device_error_info - read error status from dev and store it to info
>>>    * @dev: pointer to the device expected to have a error record
>>>    * @info: pointer to structure to store the error record
>>> + * @link_healthy: link is healthy or not
>>>    *
>>>    * Return 1 on success, 0 on error.
>>>    *
>>>    * Note that @info is reused among all error devices. Clear fields properly.
>>>    */
>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>> +                  bool link_healthy)
>>>   {
>>>       int type = pci_pcie_type(dev);
>>>       int aer = dev->aer_cap;
>>> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>>       } else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>>>              type == PCI_EXP_TYPE_RC_EC ||
>>>              type == PCI_EXP_TYPE_DOWNSTREAM ||
>>> -           info->severity == AER_NONFATAL) {
>>> +           info->severity == AER_NONFATAL ||
>>> +           (info->severity == AER_FATAL && link_healthy)) {
>>>           /* Link is still healthy for IO reads */
>>>           pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>>> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>>       /* Report all before handle them, not to lost records by reset etc. */
>>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>>               aer_print_error(e_info->dev[i], e_info);
>>>       }
>>
>> Would it be reasonable to detect if the link is intact and set the aer_get_device_error_info()
>> function's 'link_healthy' parameter accordingly? I was thinking the upstream port's Link Status
>> register could be used to indicate link viability.
>>
>> Regards,
>> Terry
> 
> Good idea. I think pciehp_check_link_active() is a good example of how to check
> link_healthy in aer_get_device_error_info().
> 
>    int pciehp_check_link_active(struct controller *ctrl)
>    {
>        struct pci_dev *pdev = ctrl_dev(ctrl);
>        u16 lnk_status;
>        int ret;
>        ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
>        if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
>            return -ENODEV;
>        ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
>        ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);
>        return ret;
>    }
> 
> Thank you for the valuable comments.
> 
> Best Regards
> Shuai

Hi, Bowman,

After diving into the code details, I found that both dpc_reset_link() and
aer_root_reset() use pci_bridge_wait_for_secondary_bus() to wait for the
secondary bus to become accessible. IMHO, pci_bridge_wait_for_secondary_bus()
is more robust than a function like pciehp_check_link_active(). So I think
reset_subordinates() is a good boundary for delineating whether a link is
accessible.

Besides, for the DPC driver, the link of the upstream port, e.g. the Root Port,
is inactive when DPC is triggered and only recovers to active once
dpc_reset_link() succeeds. But for the AER driver, the link is active both
before and after aer_root_reset(). As a result, the AER status will be
reported twice.

Best Regards,
Shuai

Patch

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 0866f79aec54..6f827c313639 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -504,7 +504,8 @@  struct aer_err_info {
 	struct pcie_tlp_log tlp;	/* TLP Header */
 };
 
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+			      bool link_healthy);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
 #endif	/* CONFIG_PCIEAER */
 
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 13b8586924ea..97ec1c17b6f4 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -1200,12 +1200,14 @@  EXPORT_SYMBOL_GPL(aer_recover_queue);
  * aer_get_device_error_info - read error status from dev and store it to info
  * @dev: pointer to the device expected to have a error record
  * @info: pointer to structure to store the error record
+ * @link_healthy: link is healthy or not
  *
  * Return 1 on success, 0 on error.
  *
  * Note that @info is reused among all error devices. Clear fields properly.
  */
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+			      bool link_healthy)
 {
 	int type = pci_pcie_type(dev);
 	int aer = dev->aer_cap;
@@ -1229,7 +1231,8 @@  int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
 	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
 		   type == PCI_EXP_TYPE_RC_EC ||
 		   type == PCI_EXP_TYPE_DOWNSTREAM ||
-		   info->severity == AER_NONFATAL) {
+		   info->severity == AER_NONFATAL ||
+		   (info->severity == AER_FATAL && link_healthy)) {
 
 		/* Link is still healthy for IO reads */
 		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
@@ -1258,11 +1261,11 @@  static inline void aer_process_err_devices(struct aer_err_info *e_info)
 
 	/* Report all before handle them, not to lost records by reset etc. */
 	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
-		if (aer_get_device_error_info(e_info->dev[i], e_info))
+		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
 			aer_print_error(e_info->dev[i], e_info);
 	}
 	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
-		if (aer_get_device_error_info(e_info->dev[i], e_info))
+		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
 			handle_error_source(e_info->dev[i], e_info);
 	}
 }
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index 62a68cde4364..b3f157a00405 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -304,7 +304,7 @@  struct pci_dev *dpc_process_error(struct pci_dev *pdev)
 		dpc_process_rp_pio_error(pdev);
 	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
 		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
-		 aer_get_device_error_info(pdev, &info)) {
+		 aer_get_device_error_info(pdev, &info, false)) {
 		aer_print_error(pdev, &info);
 		pci_aer_clear_nonfatal_status(pdev);
 		pci_aer_clear_fatal_status(pdev);
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 31090770fffc..462577b8d75a 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -196,6 +196,7 @@  pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	struct pci_dev *bridge;
 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+	struct aer_err_info info;
 
 	/*
 	 * If the error was detected by a Root Port, Downstream Port, RCEC,
@@ -223,6 +224,13 @@  pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
+
+		info.severity = AER_FATAL;
+		/* Link recovered, report fatal errors of RCiEP or EP */
+		if ((type == PCI_EXP_TYPE_ENDPOINT ||
+		     type == PCI_EXP_TYPE_RC_END) &&
+		    aer_get_device_error_info(dev, &info, true))
+			aer_print_error(dev, &info);
 	} else {
 		pci_walk_bridge(bridge, report_normal_detected, &status);
 	}
@@ -259,6 +267,7 @@  pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	if (host->native_aer || pcie_ports_native) {
 		pcie_clear_device_status(dev);
 		pci_aer_clear_nonfatal_status(dev);
+		pci_aer_clear_fatal_status(dev);
 	}
 
 	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);