| Message ID | 1359265003-16166-17-git-send-email-yinghai@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Delegated to: | Bjorn Helgaas |
On Sat, Jan 26, 2013 at 09:36:37PM -0800, Yinghai Lu wrote:
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: linux-arm-kernel@lists.infradead.org

So... what's this about? This email is all I've received, and the only thing I have to go on is one single subject line and no description of what's going on. I guess there's some other patch introducing this for_each_pci_host_bridge() macro somewhere? I guess that's part of this patch set?

Yet... I guess you want an ack for this or something... which would be irresponsible to give without knowing the purpose behind this, the reasoning, or even being able to tell whether the replacement code is functionally equivalent.

So, no ack (or nack) at the moment.

> ---
>  arch/arm/kernel/bios32.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/kernel/bios32.c b/arch/arm/kernel/bios32.c
> index 9b72261..d0befe4 100644
> --- a/arch/arm/kernel/bios32.c
> +++ b/arch/arm/kernel/bios32.c
> @@ -57,13 +57,10 @@ static void pcibios_bus_report_status(struct pci_bus *bus, u_int status_mask, in
>
>  void pcibios_report_status(u_int status_mask, int warn)
>  {
> -	struct list_head *l;
> +	struct pci_host_bridge *host_bridge = NULL;
>
> -	list_for_each(l, &pci_root_buses) {
> -		struct pci_bus *bus = pci_bus_b(l);
> -
> -		pcibios_bus_report_status(bus, status_mask, warn);
> -	}
> +	for_each_pci_host_bridge(host_bridge)
> +		pcibios_bus_report_status(host_bridge->bus, status_mask, warn);
>  }
>
>  /*
> --
> 1.7.10.4

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Sun, Jan 27, 2013 at 9:38 AM, Russell King - ARM Linux <linux@arm.linux.org.uk> wrote:
> On Sat, Jan 26, 2013 at 09:36:37PM -0800, Yinghai Lu wrote:
>> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Cc: linux-arm-kernel@lists.infradead.org
>
> So... what's this about. This email is all I've received, and the only
> thing that I have to go on is one single subject line and no description
> about what's going on. I guess there's some other patch introducing this
> for_each_pci_host_bridge() macro somewhere? I guess that's part of this
> patch set?

Yes. Sorry for not putting you on the cc list for the whole patchset; will do that in the next revision.

> Yet... I guess you want an ack for this or something... which would be
> irresponsible to give without knowing the purpose behind this, the
> reasoning, or even being able to tell whether the replacement code is
> functionally equivalent.

The purpose is to make iteration over root buses safe with respect to root-bus hotplug. Will update the change log.

> So, no ack (or nack) at the moment.
>
>> ---
>>  arch/arm/kernel/bios32.c | 9 +++------
>>  1 file changed, 3 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm/kernel/bios32.c b/arch/arm/kernel/bios32.c
>> index 9b72261..d0befe4 100644
>> --- a/arch/arm/kernel/bios32.c
>> +++ b/arch/arm/kernel/bios32.c
>> @@ -57,13 +57,10 @@ static void pcibios_bus_report_status(struct pci_bus *bus, u_int status_mask, in
>>
>>  void pcibios_report_status(u_int status_mask, int warn)
>>  {
>> -	struct list_head *l;
>> +	struct pci_host_bridge *host_bridge = NULL;
>>
>> -	list_for_each(l, &pci_root_buses) {
>> -		struct pci_bus *bus = pci_bus_b(l);
>> -
>> -		pcibios_bus_report_status(bus, status_mask, warn);
>> -	}
>> +	for_each_pci_host_bridge(host_bridge)
>> +		pcibios_bus_report_status(host_bridge->bus, status_mask, warn);
>>  }
>>
>>  /*
>> --
>> 1.7.10.4
Now we have the pci_root_buses list, and there is a lot of iteration over it with list_for_each; that is not safe once we add pci root bus hotplug support after the boot stage.

Add pci_get_next_host_bridge, and use bus_find_device in the driver core to iterate over host bridges and get the root bus at the same time. We replace searching for root buses with searching for host bridges, as host_bridge->bus is the root bus. After those replacements, we could even kill the pci_root_buses list.

Based on pci/next; can be fetched from
git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git for-pci-for-each-host-bridge

-v2: updated after pci_root_bus_hotplug got into pci/next
-v3: update changelog and cc arch maintainers on the pci core changes

Yinghai Lu (22):
  PCI: Rename pci_release_bus_bridge_dev to pci_release_host_bridge_dev
  PCI: Add dummy bus_type for pci_host_bridge
  PCI, libata: remove find_bridge in acpi_bus_type
  PCI, ACPI: Update comments for find_bridge in acpi_bus_type
  PCI: Add for_each_pci_host_bridge() and pci_get_next_host_bridge
  PCI, hotplug: Kill pci_find_next_bus in sgi_hotplug
  PCI: Kill pci_find_next_bus in pci_sysfs
  PCI, edac: Kill pci_find_next_bus in edac
  PCI, x86: Kill pci_find_next_bus in pcibios_scan_root
  PCI, x86: Kill pci_root_buses in resources reservations
  PCI, drm: Kill pci_root_buses in alpha hose setting
  PCI: Kill pci_root_buses in setup-bus
  PCI, sparc: Kill pci_find_next_bus
  PCI, ia64: Kill pci_find_next_bus
  PCI, alpha: Kill pci_root_buses
  PCI, arm: Kill pci_root_buses
  PCI, frv: Kill pci_root_buses in resources reservations
  PCI, microblaze: Kill pci_root_buses in resources reservations
  PCI, mn10300: Kill pci_root_buses in resources reservations
  PCI, powerpc: Kill pci_root_buses in resources reservations
  PCI: Kill pci_find_next_bus
  PCI: Kill pci_root_buses

 arch/alpha/kernel/pci.c                 |    6 +--
 arch/arm/kernel/bios32.c                |    9 ++---
 arch/frv/mb93090-mb00/pci-frv.c         |   37 +++++++++---------
 arch/ia64/hp/common/sba_iommu.c         |    7 ++--
 arch/ia64/sn/kernel/io_common.c         |    5 ++-
 arch/microblaze/pci/pci-common.c        |   10 ++---
 arch/mn10300/unit-asb2305/pci-asb2305.c |   62 ++++++++++++++---------------
 arch/powerpc/kernel/pci-common.c        |   13 +++---
 arch/powerpc/kernel/pci_64.c            |    8 ++--
 arch/sparc/kernel/pci.c                 |    6 ++-
 arch/x86/pci/common.c                   |    9 +++--
 arch/x86/pci/i386.c                     |   20 +++++-----
 drivers/ata/libata-acpi.c               |    6 ---
 drivers/edac/i7core_edac.c              |    6 +--
 drivers/gpu/drm/drm_fops.c              |   10 +++--
 drivers/pci/hotplug/sgi_hotplug.c       |    6 ++-
 drivers/pci/pci-driver.c                |   11 +++++-
 drivers/pci/pci-sysfs.c                 |    6 +--
 drivers/pci/probe.c                     |   13 ++-----
 drivers/pci/search.c                    |   61 +++++++++++++++---------------
 drivers/pci/setup-bus.c                 |   24 ++++++------
 include/acpi/acpi_bus.h                 |    2 +-
 include/linux/pci.h                     |   18 +++++----

 23 files changed, 187 insertions(+), 168 deletions(-)
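As a rough illustration of the pattern the cover letter describes — an iterator built on a get/put reference, like bus_find_device(), so the element handed to the caller stays valid across hotplug — here is a self-contained userspace C sketch. The struct, the static bridge list, and the refcounting are simplified stand-ins for illustration, not the kernel's actual implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct pci_host_bridge. */
struct host_bridge {
	int busnr;
	int refcount;              /* held while a caller uses the bridge */
	struct host_bridge *next;
};

static struct host_bridge b2 = { 2, 0, NULL };
static struct host_bridge b1 = { 1, 0, &b2 };
static struct host_bridge b0 = { 0, 0, &b1 };
static struct host_bridge *bridges = &b0;

/*
 * Like the proposed pci_get_next_host_bridge(): drop the reference on
 * the bridge we were handed, take a reference on the next one before
 * returning it, so the caller never holds an unreferenced pointer.
 */
static struct host_bridge *get_next_host_bridge(struct host_bridge *from)
{
	struct host_bridge *next = from ? from->next : bridges;

	if (from)
		from->refcount--;  /* "put" the previous element */
	if (next)
		next->refcount++;  /* "get" the one we hand out */
	return next;
}

#define for_each_host_bridge(b) \
	for (b = NULL; (b = get_next_host_bridge(b)) != NULL; )

int count_host_bridges(void)
{
	struct host_bridge *b;
	int n = 0;

	for_each_host_bridge(b) {
		assert(b->refcount == 1); /* we hold a reference here */
		n++;
	}
	return n;
}
```

The loop terminates by putting the last element's reference, so after a full traversal no bridge is pinned — the same invariant the klist-backed kernel iterator maintains.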
[Trimmed down the CC list, otherwise lkml will drop it. Thanks to David Miller for bringing this to my attention.]

On Sat, Feb 2, 2013 at 1:50 PM, Bjorn Helgaas <bhelgaas@google.com> wrote:
> On Sun, Jan 27, 2013 at 12:23 PM, Yinghai Lu <yinghai@kernel.org> wrote:
>> Now we have pci_root_buses list, and there is lots of iteration with
>> list_for_each of it, that is not safe after we add pci root bus hotplug
>> support after booting stage.
>>
>> Add pci_get_next_host_bridge and use bus_find_device in driver core to
>> iterate host bridge and the same time get root bus.
>>
>> We replace searching root bus with searching host_bridge,
>> as host_bridge->bus is the root bus.
>> After those replacing, we even could kill pci_root_buses list.
>
> These are the problems I think you're fixing:
>
> 1) pci_find_next_bus() is not safe because even though it holds
> pci_bus_sem while walking the pci_root_buses list, it doesn't hold a
> reference on the bus it returns. The bus could be removed while the
> caller is using it.
>
> 2) "list_for_each_entry(bus, &pci_root_buses, node)" is not safe
> because hotplug might modify the pci_root_buses list. Replacing that
> with for_each_pci_host_bridge() solves that problem by using
> bus_find_device(), which is built on klists, which are designed for
> that problem.
>
> 3) pci_find_next_bus() claims to iterate through all known PCI buses,
> but in fact only iterates through root buses.
>
> So far, so good. Those are problems we need to fix.
>
> Your solution is to introduce for_each_pci_host_bridge() as an
> iterator through the known host bridges. There are two scenarios
> where we use something like this:
>
> 1) We want to perform an operation on every known host bridge.
>
> 2) We want to initialize something for every host bridge.
>
> In my opinion, the only instance of scenario 1) is bus_rescan_store(),
> where we want to rescan all PCI host bridges.
>
> In every other case, we're doing some kind of initialization of all
> the host bridges. For these cases, for_each_pci_host_bridge() is the
> wrong solution because it doesn't work for hot-added bridges. I think
> these cases should be changed to use pcibios_root_bridge_prepare() or
> something else called in the host bridge add path.

Yes, I will check whether those for_each_pci_host_bridge() uses can be converted to calls in pcibios_root_bridge_prepare(), one by one.

Thanks

Yinghai
On Sat, Feb 2, 2013 at 9:05 PM, Yinghai Lu <yinghai@kernel.org> wrote:
>> Your solution is to introduce for_each_pci_host_bridge() as an
>> iterator through the known host bridges. There are two scenarios
>> where we use something like this:
>>
>> 1) We want to perform an operation on every known host bridge.
>>
>> 2) We want to initialize something for every host bridge.
>>
>> In my opinion, the only instance of scenario 1) is bus_rescan_store(),
>> where we want to rescan all PCI host bridges.
>>
>> In every other case, we're doing some kind of initialization of all
>> the host bridges. For these cases, for_each_pci_host_bridge() is the
>> wrong solution because it doesn't work for hot-added bridges. I think
>> these cases should be changed to use pcibios_root_bridge_prepare() or
>> something else called in the host bridge add path.
>
> Yes, will check if those for_each_pci_host_bridge could be converted to
> calls in pcibios_root_bridge_prepare one by one.

I went through those patches again; here are the findings:

1. Run-time iteration: at least three occurrences are addressed:
   a. pci-sysfs::bus_rescan_store
   b. powerpc pci_64.c::sys_pciconfig_iobase system call
   c. probe.c::pci_create_root_bus/pci_find_bus

2. edac: i7core_edac.c::i7core_pci_lastbus()
   This one does not support host bridge hotplug, and i7core_probe will bail out if it has already been probed. To make edac on i7 support host-bridge hot-add, the structure of i7core_probe would have to change. But I'm not worried about that yet, because Nehalem-EX does not support EDAC, and I assume we will not have a use case where a new -EX Intel CPU with hotplug support uses EDAC.

3. pcibios_resource_survey/pcibios_assign_unassigned_resource in the __init path, on x86/frv/microblaze/mn10300. For x86/acpi, we have a per-root-bus call in pci_root.c::acpi_pci_root_add(); the other three do not support pci root bridge hotplug yet.

4. All the other changes look like the __init path, and those arches do not support pci root bridge hotplug yet.

So this patch set looks OK now. Also, I updated
git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git for-pci-for-each-host-bridge
with a new ordering of the patches.

BTW, I would address unifying pcibios_resource_survey between x86/frv/microblaze/mn10300 later.

Thanks

Yinghai
On Tue, Feb 5, 2013 at 4:55 PM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Sat, Feb 2, 2013 at 9:05 PM, Yinghai Lu <yinghai@kernel.org> wrote:
>
>>> Your solution is to introduce for_each_pci_host_bridge() as an
>>> iterator through the known host bridges. There are two scenarios
>>> where we use something like this:
>>>
>>> 1) We want to perform an operation on every known host bridge.
>>>
>>> 2) We want to initialize something for every host bridge.
>>>
>>> In my opinion, the only instance of scenario 1) is bus_rescan_store(),
>>> where we want to rescan all PCI host bridges.
>>>
>>> In every other case, we're doing some kind of initialization of all
>>> the host bridges. For these cases, for_each_pci_host_bridge() is the
>>> wrong solution because it doesn't work for hot-added bridges. I think
>>> these cases should be changed to use pcibios_root_bridge_prepare() or
>>> something else called in the host bridge add path.
>>
>> Yes, will check if those for_each_pci_host_bridge could be converted to
>> calls in pcibios_root_bridge_prepare one by one.
>
> I went through those patches again, here are the findings:
> 1. run-time iteration: at least 3 occurrences are addressed:
>    a. in pci-sysfs::bus_rescan_store
>    b. powerpc pci_64.c::sys_pciconfig_iobase system call
>    c. probe.c::pci_create_root_bus/pci_find_bus
>
> 2. edac: i7core_edac.c::i7core_pci_lastbus()
>    this one does not support host bridge hotplug, and i7core_probe will
>    bail out if probed before. To make edac with i7 support host-bridge
>    hot-add, the structure of i7core_probe would have to change.
>    But I'm not worried about that yet, because Nehalem-EX does not
>    support EDAC, and I assume we will not have a use case with a new
>    -EX Intel CPU with hotplug support using EDAC.
>
> 3. pcibios_resource_survey/pcibios_assign_unassigned_resource in __init path.
>    it is with x86/frv/microblaze/mn10300.
>    for x86/acpi, we have per root bus calling in pci_root.c::acpi_pci_root_add().
>    the other three do not support pci_root_bridge hotplug yet.
>
> 4. looks like all other changes are __init path, and those arches do not
>    support pci_root_bridge hotplug yet.
>
> So looks like this patch set is ok now.

Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if we can avoid it. Every place it's used is a place we have to audit to make sure it's safe. I think your audit above is correct and complete, but it relies on way too much architecture knowledge. It's better if we can deduce correctness without knowing which arches support hotplug and which CPUs support EDAC.

As soon as for_each_pci_host_bridge() is in the tree, those uses can be copied to even more places. It's a macro, so it's usable by any module, even out-of-tree ones that we'll never see and can't fix. So we won't really have a good way to deprecate and remove it.

Bjorn
On Tue, Feb 5, 2013 at 4:19 PM, Bjorn Helgaas <bhelgaas@google.com> wrote:
>
> Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if
> we can avoid it. Every place it's used is a place we have to audit to
> make sure it's safe. I think your audit above is correct and
> complete, but it relies on way too much architecture knowledge. It's
> better if we can deduce correctness without knowing which arches
> support hotplug and which CPUs support EDAC.
>
> As soon as for_each_pci_host_bridge() is in the tree, those uses can
> be copied to even more places. It's a macro, so it's usable by any
> module, even out-of-tree ones that we'll never see and can't fix. So
> we won't really have a good way to deprecate and remove it.

Now we only have two references in modules:

drivers/edac/i7core_edac.c:        for_each_pci_host_bridge(host_bridge) {
drivers/pci/hotplug/sgi_hotplug.c: for_each_pci_host_bridge(host_bridge) {

For sgi_hotplug.c, it should be the same problem we have with acpiphp and pciehp; it needs to support PCI host bridge hotplug anyway.

For edac, we need to check with Mauro about their plans.

After those two are addressed, we can drop the EXPORT_SYMBOL_GPL for pci_get_next_host_bridge.

We do have pci_get_domain_bus_and_slot() as an exported symbol, so exporting pci_get_next_host_bridge should be OK for now, and it would be better than exporting the root buses list.

Thanks

Yinghai
On Tue, 5 Feb 2013 16:47:10 -0800, Yinghai Lu <yinghai@kernel.org> wrote:

> On Tue, Feb 5, 2013 at 4:19 PM, Bjorn Helgaas <bhelgaas@google.com> wrote:
> >
> > Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if
> > we can avoid it. Every place it's used is a place we have to audit to
> > make sure it's safe. I think your audit above is correct and
> > complete, but it relies on way too much architecture knowledge. It's
> > better if we can deduce correctness without knowing which arches
> > support hotplug and which CPUs support EDAC.
> >
> > As soon as for_each_pci_host_bridge() is in the tree, those uses can
> > be copied to even more places. It's a macro, so it's usable by any
> > module, even out-of-tree ones that we'll never see and can't fix. So
> > we won't really have a good way to deprecate and remove it.
>
> Now we only have two references in modules.
>
> drivers/edac/i7core_edac.c:        for_each_pci_host_bridge(host_bridge) {
> drivers/pci/hotplug/sgi_hotplug.c: for_each_pci_host_bridge(host_bridge) {
>
> for the sgi_hotplug.c, it should be same problem that have for acpiphp
> and pciehp. need to make it support pci host bridge hotplug anyway.
>
> for edac, we need to check Mauro about their plan.

The i7core_pci_lastbus() code in i7core_edac is there to make it work with some Nehalem/Nehalem-EP machines that hide the memory controller's PCI ID by using an artificially low last bus.

There's no plan to support Nehalem-EX, as the information I got is that there's no way to access the memory controller registers on it without interfering with the BIOS.

It should be noted, however, that I got a report of a Sandy Bridge server that is also hiding the memory controller's PCI address. So maybe we'll later need to add some logic to sb_edac to unhide the memory controller, just like the one in i7core_edac.

> After those two are addressed, we can drop that EXPORT_SYMBOL_GPL for
> pci_get_next_host_bridge.
>
> We do have pci_get_domain_bus_and_slot() as export symbol.
> So exporting pci_get_next_host_bridge should be ok now,
> and it would be better than exporting the root buses list.
>
> Thanks
>
> Yinghai
On Wed, Feb 6, 2013 at 1:53 AM, Mauro Carvalho Chehab <mchehab@redhat.com> wrote:
> On Tue, 5 Feb 2013 16:47:10 -0800, Yinghai Lu <yinghai@kernel.org> wrote:
>
>> On Tue, Feb 5, 2013 at 4:19 PM, Bjorn Helgaas <bhelgaas@google.com> wrote:
>> >
>> > Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if
>> > we can avoid it. Every place it's used is a place we have to audit to
>> > make sure it's safe. I think your audit above is correct and
>> > complete, but it relies on way too much architecture knowledge. It's
>> > better if we can deduce correctness without knowing which arches
>> > support hotplug and which CPUs support EDAC.
>> >
>> > As soon as for_each_pci_host_bridge() is in the tree, those uses can
>> > be copied to even more places. It's a macro, so it's usable by any
>> > module, even out-of-tree ones that we'll never see and can't fix. So
>> > we won't really have a good way to deprecate and remove it.
>>
>> Now we only have two references in modules.
>>
>> drivers/edac/i7core_edac.c:        for_each_pci_host_bridge(host_bridge) {
>> drivers/pci/hotplug/sgi_hotplug.c: for_each_pci_host_bridge(host_bridge) {
>>
>> for the sgi_hotplug.c, it should be same problem that have for acpiphp
>> and pciehp. need to make it support pci host bridge hotplug anyway.
>>
>> for edac, we need to check Mauro about their plan.
>
> The i7core_pci_lastbus() code at i7core_edac is there to make it work
> with some Nehalem/Nehalem-EP machines that hide the memory controller's
> PCI ID by using an artificially low last bus.

I don't really understand how this helps. An example would probably make it clearer.

i7core_edac.c has some very creative use of PCI infrastructure. Normally a driver has a pci_device_id table that identifies the vendor/device IDs of the devices it cares about, and the driver's .probe() method is called for every matching device. But i7core_edac only has two entries in its id_table. When we find a device that matches one of those two entries, we call i7core_probe(), which then gropes around for all the *other* devices related to that original one.

This is a bit messy. I'd like it a lot better if the device IDs in pci_dev_descr_i7core_nehalem[], pci_dev_descr_lynnfield[], etc., were just in the pci_device_id table directly. Then i7core_probe() would be called directly for every device you care about, and you could sort them out there. That should work without any need for pci_get_device(), i7core_pci_lastbus(), etc.

Bjorn
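The table-driven design Bjorn suggests — every ID the driver cares about listed in the match table, with probe() called once per matching device — can be sketched in self-contained C. The struct names and the device IDs below are hypothetical stand-ins (only 0x8086 as Intel's vendor ID is real); this is a miniature of the matching loop, not the PCI core's actual code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct pci_device_id and struct pci_dev. */
struct dev_id   { unsigned vendor, device; };
struct fake_dev { unsigned vendor, device; int probed; };

/*
 * Every device the driver cares about goes in the table, so the core
 * calls probe() per matching device - no groping around afterwards.
 * The device IDs here are made up for illustration.
 */
static const struct dev_id id_table[] = {
	{ 0x8086, 0x2c41 },   /* hypothetical memory-controller function */
	{ 0x8086, 0x2c01 },   /* hypothetical second function */
	{ 0, 0 }              /* terminator */
};

static int probe(struct fake_dev *dev)
{
	dev->probed = 1;      /* per-device setup would go here */
	return 0;
}

/* What the PCI core's matching loop does, in miniature. */
int match_and_probe(struct fake_dev *devs, size_t n)
{
	int hits = 0;

	for (size_t i = 0; i < n; i++)
		for (const struct dev_id *id = id_table; id->vendor; id++)
			if (devs[i].vendor == id->vendor &&
			    devs[i].device == id->device) {
				probe(&devs[i]);
				hits++;
				break;
			}
	return hits;
}
```

With this shape, a hot-added device simply produces another probe() call when it appears, which is exactly why Bjorn prefers it over scanning buses from inside the driver.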
On Tue, Feb 5, 2013 at 5:47 PM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Tue, Feb 5, 2013 at 4:19 PM, Bjorn Helgaas <bhelgaas@google.com> wrote:
>>
>> Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if
>> we can avoid it. Every place it's used is a place we have to audit to
>> make sure it's safe. I think your audit above is correct and
>> complete, but it relies on way too much architecture knowledge. It's
>> better if we can deduce correctness without knowing which arches
>> support hotplug and which CPUs support EDAC.
>>
>> As soon as for_each_pci_host_bridge() is in the tree, those uses can
>> be copied to even more places. It's a macro, so it's usable by any
>> module, even out-of-tree ones that we'll never see and can't fix. So
>> we won't really have a good way to deprecate and remove it.
>
> Now we only have two references in modules.
>
> drivers/edac/i7core_edac.c:        for_each_pci_host_bridge(host_bridge) {
> drivers/pci/hotplug/sgi_hotplug.c: for_each_pci_host_bridge(host_bridge) {
>
> for the sgi_hotplug.c, it should be same problem that have for acpiphp
> and pciehp. need to make it support pci host bridge hotplug anyway.
>
> for edac, we need to check Mauro about their plan.
>
> After those two are addressed, we can drop that EXPORT_SYMBOL_GPL for
> pci_get_next_host_bridge.
>
> We do have pci_get_domain_bus_and_slot() as export symbol.
> So we export pci_get_next_host_bridge should be ok now.
> and it would be better than export root buses list.

I think you're missing the point.

Search the tree for uses of for_each_pci_dev(). Almost every occurrence is a bug, because that code doesn't work correctly for hot-added devices. That code gets run for devices that are present at boot, but not for devices hot-added later.

You're proposing to add for_each_pci_host_bridge(). That will have the exact same problem as for_each_pci_dev() already does. Code that uses it will work for host bridges present at boot, but not for bridges hot-added later.

Why would we want to add an interface when every use of that interface will be a design bug? I think we should fix the existing users of pci_root_buses by changing their designs so they will work correctly with host bridge hotplug.

Bjorn
On Wed, Feb 6, 2013 at 9:54 AM, Bjorn Helgaas <bhelgaas@google.com> wrote:
> I think you're missing the point.
>
> Search the tree for uses of "for_each_pci_dev()." Almost every
> occurrence is a bug because that code doesn't work correctly for
> hot-added devices. That code gets run for devices that are present at
> boot, but not for devices hot-added later.
>
> You're proposing to add "for_each_pci_host_bridge()." That will have
> the exact same problem as for_each_pci_dev() already does. Code that
> uses it will work for host bridges present at boot, but not for
> bridges hot-added later.
>
> Why would we want to add an interface when every use of that interface
> will be a design bug? I think we should fix the existing users of
> pci_root_buses by changing their designs so they will work correctly
> with host bridge hotplug.

I'm a little confused about what you want.

In the boot stage, using for_each_pci_host_bridge or pci_root_buses is fine. For the cases that should support host-bridge hotplug by nature, there are two solutions:

1. Use for_each_pci_host_bridge, which is hotplug-safe, and make sgi_hotplug use the acpi_pci_driver interface, so acpi_pci_root_add() will call .add in the acpi_pci_driver.

2. Make them all built-in, and register those acpi_pci_drivers early, before acpi_pci_root_add is called; then we don't need to call for_each_pci_host_bridge for it.

So the difference between them:
1. Keeps module support, and registers the acpi_pci_driver later.
2. Built-in support only, and needs to register the acpi_pci_driver early.

Please let me know which one you prefer.

Thanks

Yinghai
On Wed, Feb 6, 2013 at 11:59 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Wed, Feb 6, 2013 at 9:54 AM, Bjorn Helgaas <bhelgaas@google.com> wrote:
>> I think you're missing the point.
>>
>> Search the tree for uses of "for_each_pci_dev()." Almost every
>> occurrence is a bug because that code doesn't work correctly for
>> hot-added devices. That code gets run for devices that are present at
>> boot, but not for devices hot-added later.
>>
>> You're proposing to add "for_each_pci_host_bridge()." That will have
>> the exact same problem as for_each_pci_dev() already does. Code that
>> uses it will work for host bridges present at boot, but not for
>> bridges hot-added later.
>>
>> Why would we want to add an interface when every use of that interface
>> will be a design bug? I think we should fix the existing users of
>> pci_root_buses by changing their designs so they will work correctly
>> with host bridge hotplug.
>
> I'm a little confused about what you want.
>
> In boot stage using for_each_pci_host_bridge or pci_root_buses is fine.

No, it's not. Boot-time should work exactly the same way as hot-add time, unless there's a reason it can't. There might be corner cases where we can't do that yet, e.g., the pcibios_resource_survey stuff, but in general I don't think there's anything that *forces* us to iterate through host bridges at boot-time.

> For those cases that it should support host-bridge by nature.
> there are two solutions:
> 1. use for_each_pci_host_bridge, and it is hotplug-safe.

I'm trying to tell you that for_each_pci_host_bridge() is NOT hotplug-safe. Your series makes it safer than it was, in the sense that it probably fixes stray pointer references when a host bridge hotplug happens while somebody's traversing pci_root_buses. But the whole point of for_each_pci_host_bridge() is to run some code for every bridge we know about *right now*. If a host bridge is added right after the for_each_pci_host_bridge() loop exits, that code doesn't get run. In most cases, that's a bug. The only exception I know about is the /sys/.../rescan interface.

> and make sgi_hotplug to use acpi_pci_driver interface.
> and acpi_pci_root_add() will call .add in the acpi_pci_driver.
>
> 2. make them all to be built-in, and those acpi_pci_driver should be registered
> much early before acpi_pci_root_add is called.
> then we don't need to call for_each_host_bridge for it.
>
> So difference between them:
> 1. still keep the module support, and register acpi_pci_driver later.
> 2. built-in support only, and need to register acpi_pci_driver early.

acpi_pci_driver is going away, so I don't want to add any more uses of it. Obviously it's only relevant to x86 and ia64 anyway.

What I'd like is a change where for_each_pci_host_bridge() is used only inside the PCI core and defined somewhere like drivers/pci/pci.h so it's not even available outside.

The other uses should be changed so they use pcibios_root_bridge_prepare() or something similar. That way, it will be obvious that the code supports hot-added bridges, and when it gets copied to other places, we'll be copying a working design instead of a broken one.

I don't want to have to audit these places and figure out "this is safe because this arch doesn't support host bridge hotplug" or "this is safe because this CPU doesn't support XYZ." That's not a maintainable solution. The safety should be apparent from the code itself, without requiring extra platform-specific knowledge.

Bjorn
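Bjorn's argument — a hook invoked from the bridge-add path covers hot-added bridges automatically, while a one-shot boot-time loop does not — can be shown with a small self-contained C sketch. The names (bridge_prepare, add_bridge) echo the pcibios_root_bridge_prepare() idea but are purely illustrative:

```c
#include <assert.h>

/* Stand-in for a host bridge. */
struct bridge { int prepared; };

/*
 * A pcibios_root_bridge_prepare()-style hook: per-bridge init that
 * would otherwise live in a boot-time for_each_pci_host_bridge() loop.
 */
static int bridge_prepare(struct bridge *b)
{
	b->prepared = 1;
	return 0;
}

/* The add path runs the hook for every bridge, whenever it shows up. */
static void add_bridge(struct bridge *b)
{
	bridge_prepare(b);
}

int demo(void)
{
	struct bridge boot0 = { 0 }, boot1 = { 0 }, hotplugged = { 0 };

	add_bridge(&boot0);      /* boot-time enumeration */
	add_bridge(&boot1);

	/*
	 * A one-shot for_each_pci_host_bridge() loop running here would
	 * have initialized only boot0 and boot1 ...
	 */

	add_bridge(&hotplugged); /* ... but the add-path hook also covers
				    the bridge hot-added later. */

	return boot0.prepared + boot1.prepared + hotplugged.prepared;
}
```

This is why the same code path serving boot and hot-add is self-evidently correct, with no per-arch audit needed.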
On Wed, Feb 6, 2013 at 12:50 PM, Bjorn Helgaas <bhelgaas@google.com> wrote: > On Wed, Feb 6, 2013 at 11:59 AM, Yinghai Lu <yinghai@kernel.org> wrote: >> On Wed, Feb 6, 2013 at 9:54 AM, Bjorn Helgaas <bhelgaas@google.com> wrote: >>> I think you're missing the point. >>> >>> Search the tree for uses of "for_each_pci_dev()." Almost every >>> occurrence is a bug because that code doesn't work correctly for >>> hot-added devices. That code gets run for devices that are present at >>> boot, but not for devices hot-added later. >>> >>> You're proposing to add "for_each_pci_host_bridge()." That will have >>> the exact same problem as for_each_pci_dev() already does. Code that >>> uses it will work for host bridges present at boot, but not for >>> bridges hot-added later. >>> >>> Why would we want to add an interface when every use of that interface >>> will be a design bug? I think we should fix the existing users of >>> pci_root_buses by changing their designs so they will work correctly >>> with host bridge hotplug. >> >> I'm a little confused about what you want. >> >> In boot stage using for_each_pci_host_bridge or pci_root_buses is fine. > > No, it's not. Boot-time should work exactly the same way as hot-add > time, unless there's a reason it can't. There might be corner cases > where we can't do that yet, e.g., the pcibios_resource_survey stuff, > but in general I don't think there's anything that *forces* us to > iterate through host bridges at boot-time. > >> For those cases that it should support host-bridge by nature. >> there are two solutions: >> 1. use for_each_pci_host_bridge, and it is hotplug-safe. > > I'm trying to tell you that for_each_pci_host_bridge() is NOT > hotplug-safe. Your series makes it safer than it was, in the sense > that it probably fixes stray pointer references when a host bridge > hotplug happens while somebody's traversing pci_root_buses. 
But the > whole point of for_each_pci_host_bridge() is to run some code for > every bridge we know about *right now*. If a host bridge is added > right after the for_each_pci_host_bridge() loop exits, that code > doesn't get run. In most cases, that's a bug. The only exception I > know about is the /sys/.../rescan interface. > >> and make sgi_hotplug to use acpi_pci_driver interface. >> and acpi_pci_root_add() will call .add in the acpi_pci_driver. >> >> 2. make them all to be built-in, and those acpi_pci_driver should be registered >> much early before acpi_pci_root_add is called. >> then we don't need to call for_each_host_bridge for it. >> >> So difference between them: >> 1. still keep the module support, and register acpi_pci_driver later. >> 2. built-in support only, and need to register acpi_pci_driver early. > > acpi_pci_driver is going away, so I don't want to add any more uses of > it. Obviously it's only relevant to x86 and ia64 anyway. when? I have ioapic and iommu hotplug patch set that need to use them. > > What I'd like is a change where for_each_pci_host_bridge() is used > only inside the PCI core and defined somewhere like drivers/pci/pci.h > so it's not even available outside. > > The other uses should be changed so they use > pcibios_root_bridge_prepare() or something similar. That way, it will > be obvious that the code supports hot-added bridges, and when it gets > copied to other places, we'll be copying a working design instead of a > broken one. > > I don't want to have to audit these places and figure out "this is > safe because this arch doesn't support host bridge hotplug" or "this > is safe because this CPU doesn't support XYZ." That's not a > maintainable solution. The safety should be apparent from the code > itself, without requiring extra platform-specific knowledge. so you still did not answer you want 1 or 2 yet: for sgi_hotplug, 1. still keep the module support, and register acpi_pci_driver later. 2. 
built-in support only, and need to register acpi_pci_driver early. Thanks Yinghai
On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: > On Wed, Feb 6, 2013 at 12:50 PM, Bjorn Helgaas <bhelgaas@google.com> wrote: > > On Wed, Feb 6, 2013 at 11:59 AM, Yinghai Lu <yinghai@kernel.org> wrote: > >> On Wed, Feb 6, 2013 at 9:54 AM, Bjorn Helgaas <bhelgaas@google.com> wrote: > >>> I think you're missing the point. > >>> > >>> Search the tree for uses of "for_each_pci_dev()." Almost every > >>> occurrence is a bug because that code doesn't work correctly for > >>> hot-added devices. That code gets run for devices that are present at > >>> boot, but not for devices hot-added later. > >>> > >>> You're proposing to add "for_each_pci_host_bridge()." That will have > >>> the exact same problem as for_each_pci_dev() already does. Code that > >>> uses it will work for host bridges present at boot, but not for > >>> bridges hot-added later. > >>> > >>> Why would we want to add an interface when every use of that interface > >>> will be a design bug? I think we should fix the existing users of > >>> pci_root_buses by changing their designs so they will work correctly > >>> with host bridge hotplug. > >> > >> I'm a little confused about what you want. > >> > >> In boot stage using for_each_pci_host_bridge or pci_root_buses is fine. > > > > No, it's not. Boot-time should work exactly the same way as hot-add > > time, unless there's a reason it can't. There might be corner cases > > where we can't do that yet, e.g., the pcibios_resource_survey stuff, > > but in general I don't think there's anything that *forces* us to > > iterate through host bridges at boot-time. > > > >> For those cases that it should support host-bridge by nature. > >> there are two solutions: > >> 1. use for_each_pci_host_bridge, and it is hotplug-safe. > > > > I'm trying to tell you that for_each_pci_host_bridge() is NOT > > hotplug-safe. 
Your series makes it safer than it was, in the sense > > that it probably fixes stray pointer references when a host bridge > > hotplug happens while somebody's traversing pci_root_buses. But the > > whole point of for_each_pci_host_bridge() is to run some code for > > every bridge we know about *right now*. If a host bridge is added > > right after the for_each_pci_host_bridge() loop exits, that code > > doesn't get run. In most cases, that's a bug. The only exception I > > know about is the /sys/.../rescan interface. > > > >> and make sgi_hotplug to use acpi_pci_driver interface. > >> and acpi_pci_root_add() will call .add in the acpi_pci_driver. > >> > >> 2. make them all to be built-in, and those acpi_pci_driver should be registered > >> much early before acpi_pci_root_add is called. > >> then we don't need to call for_each_host_bridge for it. > >> > >> So difference between them: > >> 1. still keep the module support, and register acpi_pci_driver later. > >> 2. built-in support only, and need to register acpi_pci_driver early. > > > > acpi_pci_driver is going away, so I don't want to add any more uses of > > it. Obviously it's only relevant to x86 and ia64 anyway. > > when? > > I have ioapic and iommu hotplug patch set that need to use them. I suppose it would be a bit simpler if you tried to fix things up before adding more stuff on top of them, which only makes it more difficult to clean them up when this additional stuff is applied. A patchset to remove acpi_pci_driver has been posted already and it is going to be submitted again after 3.9-rc1. So this is the time frame. > > What I'd like is a change where for_each_pci_host_bridge() is used > > only inside the PCI core and defined somewhere like drivers/pci/pci.h > > so it's not even available outside. > > > > The other uses should be changed so they use > > pcibios_root_bridge_prepare() or something similar. 
That way, it will > > be obvious that the code supports hot-added bridges, and when it gets > > copied to other places, we'll be copying a working design instead of a > > broken one. > > > > I don't want to have to audit these places and figure out "this is > > safe because this arch doesn't support host bridge hotplug" or "this > > is safe because this CPU doesn't support XYZ." That's not a > > maintainable solution. The safety should be apparent from the code > > itself, without requiring extra platform-specific knowledge. > > so you still did not answer you want 1 or 2 yet: > > for sgi_hotplug, > > 1. still keep the module support, and register acpi_pci_driver later. > 2. built-in support only, and need to register acpi_pci_driver early. Please work with the assumption that acpi_pci_driver is not going to be there any more. Thanks, Rafael
On Wed, Feb 6, 2013 at 1:43 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: > On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: >> so you still did not answer you want 1 or 2 yet: >> >> for sgi_hotplug, >> >> 1. still keep the module support, and register acpi_pci_driver later. >> 2. built-in support only, and need to register acpi_pci_driver early. > > Please work with the assumption that acpi_pci_driver is not going to be there > any more. > I think I could change ioapic and iommu hotplug to weak add/remove because they should be built-in by nature. but how about others like sgi_hotplug etc? Thanks Yinghai
On Wednesday, February 06, 2013 01:53:50 PM Yinghai Lu wrote: > On Wed, Feb 6, 2013 at 1:43 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: > > On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: > >> so you still did not answer you want 1 or 2 yet: > >> > >> for sgi_hotplug, > >> > >> 1. still keep the module support, and register acpi_pci_driver later. > >> 2. built-in support only, and need to register acpi_pci_driver early. > > > > Please work with the assumption that acpi_pci_driver is not going to be there > > any more. > > > > I think I could change ioapic and iommu hotplug to weak add/remove because they > should be built-in by nature. > > but how about others like sgi_hotplug etc? I'd really prefer to wait for the patchset removing acpi_pci_driver (from Myron) before proceeding with more changes in that area. Myron, do you have a prototype based on the current linux-next? Rafael
On Wed, Feb 6, 2013 at 3:05 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: > On Wednesday, February 06, 2013 01:53:50 PM Yinghai Lu wrote: >> On Wed, Feb 6, 2013 at 1:43 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: >> > On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: >> >> so you still did not answer you want 1 or 2 yet: >> >> >> >> for sgi_hotplug, >> >> >> >> 1. still keep the module support, and register acpi_pci_driver later. >> >> 2. built-in support only, and need to register acpi_pci_driver early. >> > >> > Please work with the assumption that acpi_pci_driver is not going to be there >> > any more. >> > >> >> I think I could change ioapic and iommu hotplug to weak add/remove because they >> should be built-in by nature. >> >> but how about others like sgi_hotplug etc? I think that could be handled with pcibios_root_bridge_prepare() or something similar -- something that happens in the "add host bridge" path. I'm not saying we necessarily have all the right hooks in place yet. It's just that I'd rather figure out what the right hooks *are* and add them if necessary, than just convert pci_root_buses to for_each_pci_host_bridge(). > I'd really prefer to wait for the patchset removing acpi_pci_driver (from > Myron) before proceeding with more changes in that area. > > Myron, do you have a prototype based on the current linux-next? I think it's really the acpiphp/pciehp issue that's holding up the removal of acpi_pci_driver. I'm not sure if anybody's working on that.
On Wed, Feb 6, 2013 at 3:02 PM, Bjorn Helgaas <bhelgaas@google.com> wrote: > On Wed, Feb 6, 2013 at 3:05 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: >> On Wednesday, February 06, 2013 01:53:50 PM Yinghai Lu wrote: >>> On Wed, Feb 6, 2013 at 1:43 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: >>> > On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: >>> >> so you still did not answer you want 1 or 2 yet: >>> >> >>> >> for sgi_hotplug, >>> >> >>> >> 1. still keep the module support, and register acpi_pci_driver later. >>> >> 2. built-in support only, and need to register acpi_pci_driver early. >>> > >>> > Please work with the assumption that acpi_pci_driver is not going to be there >>> > any more. >>> > >>> >>> I think I could change ioapic and iommu hotplug to weak add/remove because they >>> should be built-in by nature. >>> >>> but how about others like sgi_hotplug etc? > > I think that could be handled with pcibios_root_bridge_prepare() or > something similar -- something that happens in the "add host bridge" > path. I'm not saying we necessarily have all the right hooks in place > yet. It's just that I'd rather figure out what the right hooks *are* > and add them if necessary, than just convert pci_root_buses to > for_each_pci_host_bridge(). So we will have everything built-in for acpi_pci_driver and call add/remove directly? BTW, it looks like sgi_hotplug may need some time to convert ... Also not sure who can test it. > >> I'd really prefer to wait for the patchset removing acpi_pci_driver (from >> Myron) before proceeding with more changes in that area. >> >> Myron, do you have a prototype based on the current linux-next? > > I think it's really the acpiphp/pciehp issue that's holding up the > removal of acpi_pci_driver. I'm not sure if anybody's working on > that. Myron's version from last December would split all acpi_pci_driver add/remove calls into acpi_pci_root_add/remove. The other way is to use a host-bridge notifier, but that looks like it is not favored.
BTW, it looks like I could put the add/remove calls directly in acpi_pci_root_add/remove for ioapic and iommu. Thanks Yinghai
On 2013-2-7 7:02, Bjorn Helgaas wrote: > On Wed, Feb 6, 2013 at 3:05 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: >> On Wednesday, February 06, 2013 01:53:50 PM Yinghai Lu wrote: >>> On Wed, Feb 6, 2013 at 1:43 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote: >>>> On Wednesday, February 06, 2013 01:28:27 PM Yinghai Lu wrote: >>>>> so you still did not answer you want 1 or 2 yet: >>>>> >>>>> for sgi_hotplug, >>>>> >>>>> 1. still keep the module support, and register acpi_pci_driver later. >>>>> 2. built-in support only, and need to register acpi_pci_driver early. >>>> >>>> Please work with the assumption that acpi_pci_driver is not going to be there >>>> any more. >>>> >>> >>> I think I could change ioapic and iommu hotplug to weak add/remove because they >>> should be built-in by nature. >>> >>> but how about others like sgi_hotplug etc? > > I think that could be handled with pcibios_root_bridge_prepare() or > something similar -- something that happens in the "add host bridge" > path. I'm not saying we necessarily have all the right hooks in place > yet. It's just that I'd rather figure out what the right hooks *are* > and add them if necessary, than just convert pci_root_buses to > for_each_pci_host_bridge(). > >> I'd really prefer to wait for the patchset removing acpi_pci_driver (from >> Myron) before proceeding with more changes in that area. >> >> Myron, do you have a prototype based on the current linux-next? > > I think it's really the acpiphp/pciehp issue that's holding up the > removal of acpi_pci_driver. I'm not sure if anybody's working on > that. Hi Bjorn, I'm working on this topic, but have been a little busy with other stuff in the last few days. I guess I could shoot out a version in the coming days. Thanks!
Em Wed, 6 Feb 2013 10:45:16 -0700 Bjorn Helgaas <bhelgaas@google.com> escreveu: > On Wed, Feb 6, 2013 at 1:53 AM, Mauro Carvalho Chehab > <mchehab@redhat.com> wrote: > > Em Tue, 5 Feb 2013 16:47:10 -0800 > > Yinghai Lu <yinghai@kernel.org> escreveu: > > > >> On Tue, Feb 5, 2013 at 4:19 PM, Bjorn Helgaas <bhelgaas@google.com> wrote: > >> > > >> > Maybe. I'd rather not introduce for_each_pci_host_bridge() at all, if > >> > we can avoid it. Every place it's used is a place we have to audit to > >> > make sure it's safe. I think your audit above is correct and > >> > complete, but it relies on way too much architecture knowledge. It's > >> > better if we can deduce correctness without knowing which arches > >> > support hotplug and which CPUs support EDAC. > >> > > >> > As soon as for_each_pci_host_bridge() is in the tree, those uses can > >> > be copied to even more places. It's a macro, so it's usable by any > >> > module, even out-of-tree ones that we'll never see and can't fix. So > >> > we won't really have a good way to deprecate and remove it. > >> > >> Now we only have two references in modules. > >> > >> drivers/edac/i7core_edac.c: for_each_pci_host_bridge(host_bridge) { > >> drivers/pci/hotplug/sgi_hotplug.c: for_each_pci_host_bridge(host_bridge) { > >> > >> for the sgi_hotplug.c, it should be same problem that have for acpiphp > >> and pciehp. > >> need to make it support pci host bridge hotplug anyway. > >> > >> for edac, we need to check Mauro about their plan. > > > > The i7core_pci_lastbus() code at i7core_edac is there to make it work > > with some Nehalem/Nehalem-EP machines that hide the memory controller's > > PCI ID by using an artificially low last bus. > > I don't really understand how this helps. An example would probably > make it clearer. > > i7core_edac.c has some very creative use of PCI infrastructure. 
> Normally a driver has a pci_device_id table that identifies the > vendor/device IDs of the devices it cares about, and the driver's > .probe() method is called for every matching device. > > But i7core_edac only has two entries in its id_table. When we find a > device that matches one of those two entries, we call i7core_probe(), > which then gropes around for all the *other* devices related to that > original one. This is a bit messy. > > I'd like it a lot better if the device IDs in > pci_dev_descr_i7core_nehalem[], pci_dev_descr_lynnfield[], etc., were > just in the pci_device_id table directly. Then i7core_probe() would > be called directly for every device you care about, and you could sort > them out there. That should work without any need for > pci_get_device(), i7core_pci_lastbus(), etc. Bjorn, On almost all Intel memory controllers since 2002, the memory controller handling functions were split into several different PCI devices and PCI functions. So, for example, even if you look at an old driver like i5000_edac.c, you'll see 5 different PCI IDs that are required to control a single device:

#ifndef PCI_DEVICE_ID_INTEL_FBD_0
#define PCI_DEVICE_ID_INTEL_FBD_0	0x25F5
#endif
#ifndef PCI_DEVICE_ID_INTEL_FBD_1
#define PCI_DEVICE_ID_INTEL_FBD_1	0x25F6
#endif

/* Device 16,
 * Function 0: System Address
 * Function 1: Memory Branch Map, Control, Errors Register
 * Function 2: FSB Error Registers
 *
 * All 3 functions of Device 16 (0,1,2) share the SAME DID */
#define PCI_DEVICE_ID_INTEL_I5000_DEV16	0x25F0

/* OFFSETS for Function 1 */
(a long list of registers there)

/*
 * Device 21,
 * Function 0: Memory Map Branch 0
 *
 * Device 22,
 * Function 0: Memory Map Branch 1
 */
#define PCI_DEVICE_ID_I5000_BRANCH_0	0x25F5
#define PCI_DEVICE_ID_I5000_BRANCH_1	0x25F6

(another long list of registers there)

I've no idea why the hardware engineers there decided to do it that way, but the number of different PCI devices required for the functionality to work has increased on their newer
chipsets. At the Sandy Bridge driver (sb_edac.c), all those PCI devices need to be opened at the same time, in order to allow controlling a single memory controller (and a system has one memory controller per socket):

#define PCI_DEVICE_ID_INTEL_SBRIDGE_SAD0	0x3cf4	/* 12.6 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_SAD1	0x3cf6	/* 12.7 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_BR		0x3cf5	/* 13.6 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_HA0	0x3ca0	/* 14.0 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TA	0x3ca8	/* 15.0 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_RAS	0x3c71	/* 15.1 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD0	0x3caa	/* 15.2 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD1	0x3cab	/* 15.3 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD2	0x3cac	/* 15.4 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD3	0x3cad	/* 15.5 */
#define PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_DDRIO	0x3cb8	/* 17.0 */

The first thing that any EDAC driver needs to do is to get the memory configuration (number of DIMMs, channels filled, DIMM size, etc). In the case of sb_edac, the logic is at get_dimm_config(). It needs to read data from _several_ of the above PCI IDs:

static int get_dimm_config(struct mem_ctl_info *mci)
{
	...
	pci_read_config_dword(pvt->pci_br, SAD_TARGET, &reg);	// reads from PCI_DEVICE_ID_INTEL_SBRIDGE_BR
	...
	pci_read_config_dword(pvt->pci_br, SAD_CONTROL, &reg);	// reads from PCI_DEVICE_ID_INTEL_SBRIDGE_BR
	...
	pci_read_config_dword(pvt->pci_ras, RASENABLES, &reg);	// reads from PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_RAS
	...
	pci_read_config_dword(pvt->pci_ta, MCMTR, &pvt->info.mcmtr);	// reads from PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TA
	...
	pci_read_config_dword(pvt->pci_ddrio, RANK_CFG_A, &reg);	// reads from PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_DDRIO
	...
	for (i = 0; i < NUM_CHANNELS; i++) {
		...
		for (j = 0; j < ARRAY_SIZE(mtr_regs); j++) {
			...
			pci_read_config_dword(pvt->pci_tad[i], mtr_regs[j], &mtr);	// PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD0 to PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_TAD3

So, while the usual design of having one PCI ID entry for each device works for the vast majority of drivers, as each different PCI ID/function is typically independent and pci_driver.probe() can be called for every entry inside the PCI ID table, that's not the case for EDAC. In EDAC drivers, the probing routine can be called only after calling pci_get_device() for the entire set of PCI devices that belong to the memory controller. To make things worse, the PCI IDs for the memory controllers are sometimes after PCI lastbus. So, the logic used by almost all drivers there is to put one PCI ID "detect" device in the table, used to discover whether the system has a supported memory controller chipset. If it matches, pci_driver.probe() will seek out the actual PCI devices that are required for it to work, which may or may not be the same as the "detect" device. Also, the highest bus corresponds to the first memory controller. So, bus=255 matches the memory controller for CPU socket 0, bus=254 matches the one for CPU socket 1, and so on. That forced the driver to probe all devices at the same time, on all CPU sockets, in order to reverse the order when initializing the memory controller EDAC structures. There are some other odd details there... In the case of i7core_edac, it supports 3 different versions of memory controllers; each version has its own set of PCI IDs. So, its "real" PCI ID table set has 3 entries:

static const struct pci_id_table pci_dev_table[] = {
	PCI_ID_TABLE_ENTRY(pci_dev_descr_i7core_nehalem),
	PCI_ID_TABLE_ENTRY(pci_dev_descr_lynnfield),
	PCI_ID_TABLE_ENTRY(pci_dev_descr_i7core_westmere),
	{0,}			/* 0 terminated list. */
};

while its PCI ID "detect" table has only two, as the PCI device 8086:342e (PCI_DEVICE_ID_INTEL_X58_HUB_MGMT) is found on both Nehalem and Westmere:

static DEFINE_PCI_DEVICE_TABLE(i7core_pci_tbl) = {
	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_X58_HUB_MGMT)},
	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LYNNFIELD_QPI_LINK0)},
	{0,}			/* 0 terminated list. */
diff --git a/arch/arm/kernel/bios32.c b/arch/arm/kernel/bios32.c
index 9b72261..d0befe4 100644
--- a/arch/arm/kernel/bios32.c
+++ b/arch/arm/kernel/bios32.c
@@ -57,13 +57,10 @@ static void pcibios_bus_report_status(struct pci_bus *bus, u_int status_mask, in
 
 void pcibios_report_status(u_int status_mask, int warn)
 {
-	struct list_head *l;
+	struct pci_host_bridge *host_bridge = NULL;
 
-	list_for_each(l, &pci_root_buses) {
-		struct pci_bus *bus = pci_bus_b(l);
-
-		pcibios_bus_report_status(bus, status_mask, warn);
-	}
+	for_each_pci_host_bridge(host_bridge)
+		pcibios_bus_report_status(host_bridge->bus, status_mask, warn);
 }
 
 /*
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/kernel/bios32.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)