| Message ID   | 20180226132112.81447-6-mika.westerberg@linux.intel.com (mailing list archive) |
| ------------ | ------------------------------------------------------------------------------ |
| State        | New, archived                                                                    |
| Delegated to | Bjorn Helgaas                                                                    |
On Mon, Feb 26, 2018 at 04:21:12PM +0300, Mika Westerberg wrote:
> Following pciehp, mark unplugged PCI devices disconnected. This makes
> sure PCI core code leaves the now-missing hardware registers alone.
>
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/pci/hotplug/acpiphp_glue.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> index 5efa21cdddc9..0aef35ee665a 100644
> --- a/drivers/pci/hotplug/acpiphp_glue.c
> +++ b/drivers/pci/hotplug/acpiphp_glue.c
> @@ -651,6 +651,11 @@ static void trim_stale_devices(struct pci_dev *dev)
>  		alive = pci_device_is_present(dev);
>
>  	if (!alive) {
> +		pci_dev_set_disconnected(dev, NULL);
> +		if (pci_has_subordinate(dev))
> +			pci_walk_bus(dev->subordinate, pci_dev_set_disconnected,
> +				     NULL);

I hate this pci_dev_set_disconnected() thing and hope to remove it
someday.  It's racy and adds additional states that are hard to manage,
e.g., "device has been removed and will not respond to PCI accesses,
but pci_dev_set_disconnected() hasn't been called yet".  If we can
handle that case (and we should), we shouldn't need
pci_dev_set_disconnected().

But I guess I accepted it in other places, so now I have to accept it
here.

I do wonder whether it should be factored into a "pci_disconnect_dev()"
that does both the device itself and the bus walk.  And I wonder
whether that should be called from pci_stop_and_remove_bus_device(),
since some callers do this and others don't.

> +
>  		pci_stop_and_remove_bus_device(dev);
>  		if (adev)
>  			acpi_bus_trim(adev);
> --
> 2.16.1
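For illustration, a minimal sketch of the kind of "pci_disconnect_dev()"
helper suggested above; the name and placement are hypothetical, and it
simply folds the two calls from the hunk into one function:

	/*
	 * Hypothetical helper (not in the tree): mark a device and every
	 * device below it disconnected in one call.
	 */
	static void pci_disconnect_dev(struct pci_dev *dev)
	{
		pci_dev_set_disconnected(dev, NULL);
		if (pci_has_subordinate(dev))
			pci_walk_bus(dev->subordinate, pci_dev_set_disconnected,
				     NULL);
	}

It merely centralizes the pattern the patch open-codes in
trim_stale_devices().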
On Tue, Mar 27, 2018 at 02:45:12PM -0500, Bjorn Helgaas wrote:
> I hate this pci_dev_set_disconnected() thing and hope to remove it
> someday.  It's racy and adds additional states that are hard to
> manage, e.g., "device has been removed and will not respond to PCI
> accesses, but pci_dev_set_disconnected() hasn't been called yet".  If
> we can handle that case (and we should), we shouldn't need
> pci_dev_set_disconnected().

Absent pci_dev_is_disconnected(), how can the PCI core detect whether a
device is still there (versus has been hot-removed)?  The only valid
method I know of is reading the vendor ID from config space.  However,
reading config space takes time.  To handle surprise removal properly,
we ought to check presence of the device frequently, lest we clutter
the logs with errors, unnecessarily occupy the CPU and bus, or in some
cases hang the machine.  Ideally, we would check presence of the device
every time we access it.

Now, imagine that every time we access config or MMIO space we also
read the vendor ID from config space: we essentially double the number
of accesses to the device.  Does that make sense, performance-wise?

As long as pci_dev_is_disconnected() returns false, the device is
either there or was until very recently, so in general checking that
instead of reading the vendor ID should be sufficient and should result
in acceptable performance.

I keep hearing these complaints about the PCI_DEV_DISCONNECTED flag,
from Greg but now also from you, but I'm not seeing constructive
criticism on how checking presence of a device should be handled
instead, in a way that doesn't negatively impact performance.

IMO it was a mistake to constrain visibility of the flag to the PCI
core (at Greg's behest); it should have been made available to drivers
so they're afforded a better method of detecting surprise removal than
just reading the vendor ID every time they access the device.

Thanks,

Lukas
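For context on the vendor-ID approach described above, a simplified
sketch of such a presence check; the real pci_device_is_present() in the
PCI core also copes with CRS (configuration request retry) responses and
other details omitted here, and the function name below is made up, so
treat this as illustrative only:

	/* Simplified presence check via a config read of the Vendor ID. */
	static bool device_seems_present(struct pci_dev *pdev)
	{
		u32 id;

		if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &id))
			return false;	/* config access itself failed */

		/* A hot-removed device's config reads come back as all ones. */
		return id != 0xffffffff && id != 0x00000000;
	}

Each such check is an extra round trip to the device, which is the
performance cost Lukas is pointing at.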
[+cc Greg]

On Tue, Mar 27, 2018 at 10:52:36PM +0200, Lukas Wunner wrote:
> On Tue, Mar 27, 2018 at 02:45:12PM -0500, Bjorn Helgaas wrote:
> > I hate this pci_dev_set_disconnected() thing and hope to remove it
> > someday.  It's racy and adds additional states that are hard to
> > manage, e.g., "device has been removed and will not respond to PCI
> > accesses, but pci_dev_set_disconnected() hasn't been called yet".  If
> > we can handle that case (and we should), we shouldn't need
> > pci_dev_set_disconnected().
>
> Absent pci_dev_is_disconnected(), how can the PCI core detect whether a
> device is still there (versus has been hot-removed)?  The only valid
> method I know of is reading the vendor ID from config space.  However,
> reading config space takes time.  To handle surprise removal properly,
> we ought to check presence of the device frequently, lest we clutter
> the logs with errors, unnecessarily occupy the CPU and bus, or in some
> cases hang the machine.  Ideally, we would check presence of the device
> every time we access it.

I'm not proposing a change in pci_dev_is_disconnected() for *this*
patch, so we can have a discussion here, but I don't think resolving
this is a precondition for this patch.  I was just expressing remorse,
which I agree is not constructive, so I should have kept quiet.

I know you understand the problem, but to try to make amends, I'll
recap it here for any onlookers:

  - driver calls pci_dev_is_disconnected(), learns device is present
  - device is removed
  - driver reads register, believing device is still present

Obviously the register read fails, because the device is gone.  So the
driver has to be prepared for that failure regardless of whether we
have pci_dev_is_disconnected().

> Now, imagine that every time we access config or MMIO space we also
> read the vendor ID from config space: we essentially double the number
> of accesses to the device.  Does that make sense, performance-wise?

No, but I wouldn't advocate that strategy.

> As long as pci_dev_is_disconnected() returns false, the device is
> either there or was until very recently, so in general checking that
> instead of reading the vendor ID should be sufficient and should
> result in acceptable performance.
>
> I keep hearing these complaints about the PCI_DEV_DISCONNECTED flag,
> from Greg but now also from you, but I'm not seeing constructive
> criticism on how checking presence of a device should be handled
> instead, in a way that doesn't negatively impact performance.
>
> IMO it was a mistake to constrain visibility of the flag to the PCI
> core (at Greg's behest); it should have been made available to drivers
> so they're afforded a better method of detecting surprise removal than
> just reading the vendor ID every time they access the device.

To avoid the race, drivers have to check for a valid value anyway.  If
they *also* check pci_dev_is_disconnected(), that clutters the code and
gives a false sense of security.

When a read fails, it normally returns ~0 as the data to the driver.
The driver knows the semantics of the registers it reads, and often it
can tell immediately that a successful read of the register could never
return that value, so it doesn't need to try any more accesses to
Vendor ID or anything else.

If ~0 *is* a possible valid value, the driver can read some other
register (maybe Vendor ID, or maybe an MMIO register that can be read
faster).

Bjorn
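As an illustration of the pattern Bjorn describes, a sketch using a
made-up device ("foo") and made-up registers; nothing here is a real
driver API:

	#include <linux/errno.h>
	#include <linux/io.h>
	#include <linux/types.h>

	#define FOO_STATUS	0x10	/* hypothetical register; ~0 may be valid */
	#define FOO_ID		0x00	/* hypothetical register; never reads ~0 */

	struct foo_dev {
		void __iomem *mmio;
	};

	static int foo_read_status(struct foo_dev *foo, u32 *status)
	{
		*status = readl(foo->mmio + FOO_STATUS);
		if (*status != ~0U)
			return 0;	/* plausible value, device evidently present */

		/*
		 * ~0 might be a legitimate FOO_STATUS value, so confirm with
		 * a register that can never read back as all ones on a
		 * working device before concluding it was surprise-removed.
		 */
		if (readl(foo->mmio + FOO_ID) == ~0U)
			return -ENODEV;

		return 0;
	}

The caller handles -ENODEV like any other failure; no
pci_dev_is_disconnected() check is needed on the fast path.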
On Tue, Mar 27, 2018 at 05:13:17PM -0500, Bjorn Helgaas wrote:
> > As long as pci_dev_is_disconnected() returns false, the device is
> > either there or was until very recently, so in general checking that
> > instead of reading the vendor ID should be sufficient and should
> > result in acceptable performance.
> >
> > I keep hearing these complaints about the PCI_DEV_DISCONNECTED flag,
> > from Greg but now also from you, but I'm not seeing constructive
> > criticism on how checking presence of a device should be handled
> > instead, in a way that doesn't negatively impact performance.

You can't have things both ways.  If you are worried about your device
going away (and you have to be), then just check all reads and be fine
with it.  If you have values that can be all 0xff, then just accept
that as a valid value and move on to the next read, where it can't be
valid.

Don't try to be smart and constantly read other registers like the
vendor ID, because you will still race with reading them as well (and
some devices might not like that, as this is not a codepath that any
device expects to be "fast", so odds are the silicon isn't optimised
for that at all).

> > IMO it was a mistake to constrain visibility of the flag to the PCI
> > core (at Greg's behest); it should have been made available to
> > drivers so they're afforded a better method of detecting surprise
> > removal than just reading the vendor ID every time they access the
> > device.

No, just check the value you read and move on.  This is something we
have had to handle for 20 years now; it's not a new thing at all, so
why is this even something to be argued about?  Is it that difficult
for your driver to check this and handle the removal problem
gracefully?

> To avoid the race, drivers have to check for a valid value anyway.  If
> they *also* check pci_dev_is_disconnected(), that clutters the code
> and gives a false sense of security.
>
> When a read fails, it normally returns ~0 as the data to the driver.
> The driver knows the semantics of the registers it reads, and often it
> can tell immediately that a successful read of the register could
> never return that value, so it doesn't need to try any more accesses
> to Vendor ID or anything else.
>
> If ~0 *is* a possible valid value, the driver can read some other
> register (maybe Vendor ID, or maybe an MMIO register that can be read
> faster).

I totally agree with Bjorn here.

thanks,

greg k-h
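For readers without the source at hand, the flag being debated is
private to the PCI core: roughly speaking (exact details may differ
between kernel versions), it is a bit in pci_dev::priv_flags, with
PCI_DEV_DISCONNECTED defined next to a pair of small helpers in
drivers/pci/pci.h along these lines:

	/* Approximate shape of the helpers in drivers/pci/pci.h. */
	static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
	{
		set_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
		return 0;
	}

	static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
	{
		return test_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
	}

Returning 0 lets pci_walk_bus() keep walking the whole subtree, which
is exactly how the patch below uses pci_dev_set_disconnected() as a
callback.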
diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index 5efa21cdddc9..0aef35ee665a 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -651,6 +651,11 @@ static void trim_stale_devices(struct pci_dev *dev)
 		alive = pci_device_is_present(dev);
 
 	if (!alive) {
+		pci_dev_set_disconnected(dev, NULL);
+		if (pci_has_subordinate(dev))
+			pci_walk_bus(dev->subordinate, pci_dev_set_disconnected,
+				     NULL);
+
 		pci_stop_and_remove_bus_device(dev);
 		if (adev)
 			acpi_bus_trim(adev);