Message ID | 20210720205009.111806-3-nirmal.patel@linux.intel.com (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | Lorenzo Pieralisi |
Series | Issue secondary bus reset and domain window reset |
On Tue, Jul 20, 2021 at 01:50:09PM -0700, Nirmal Patel wrote:
> In order to properly re-initialize the VMD domain during repetitive driver
> attachment or reboot tests, ensure that the VMD root ports are re-initialized
> to a blank state that can be re-enumerated appropriately by the PCI core.
> This is performed by re-initializing all of the bridge windows to ensure
> that PCI core enumeration does not detect potentially invalid bridge windows
> and misinterpret them as firmware-assigned windows, when they simply may be
> invalid bridge window information from a previous boot.

Rewrap commit log to fit in 75 columns.  No problem about v2 vs v1.

> Signed-off-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Jon Derrick <jonathan.derrick@intel.com>
> ---
>  drivers/pci/controller/vmd.c | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 6e1c60048774..e52bdb95218e 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -651,6 +651,39 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
>  	return 0;
>  }
>
> +

Remove spurious blank line here.

> +static void vmd_domain_reset_windows(struct vmd_dev *vmd)
> +{
> +	u8 hdr_type;
> +	char __iomem *addr;
> +	int dev_seq;
> +	u8 functions;
> +	u8 fn_seq;
> +	int max_devs = resource_size(&vmd->resources[0]) * 32;
> +
> +	for (dev_seq = 0; dev_seq < max_devs; dev_seq++) {
> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_VENDOR_ID;
> +		if (readw(addr) != PCI_VENDOR_ID_INTEL)
> +			continue;
> +
> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_HEADER_TYPE;
> +		hdr_type = readb(addr) & PCI_HEADER_TYPE_MASK;
> +		if (hdr_type != PCI_HEADER_TYPE_BRIDGE)
> +			continue;
> +
> +		functions = !!(hdr_type & 0x80) ? 8 : 1;
> +		for (fn_seq = 0; fn_seq < functions; fn_seq++)
> +		{

Put "{" on previous line.

Looks quite parallel to vmd_domain_sbr(), except that here we iterate
through functions as well.  Why does vmd_domain_sbr() not need to
iterate through functions?

> +			addr = VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_VENDOR_ID;
> +			if (readw(addr) != PCI_VENDOR_ID_INTEL)
> +				continue;
> +
> +			memset_io((VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_IO_BASE),
> +				  0, PCI_ROM_ADDRESS1 - PCI_IO_BASE);

Make the lines above fit in 80 columns.

> +		}
> +	}
> +}
> +
>  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  {
>  	struct pci_sysdata *sd = &vmd->sysdata;
> @@ -741,6 +774,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  		.parent = res,
>  	};
>
> +	vmd_domain_reset_windows(vmd);
> +
>  	sd->vmd_dev = vmd->dev;
>  	sd->domain = vmd_find_free_domain();
>  	if (sd->domain < 0)
> --
> 2.27.0
>
On 7/20/2021 3:42 PM, Bjorn Helgaas wrote:
> On Tue, Jul 20, 2021 at 01:50:09PM -0700, Nirmal Patel wrote:
>> In order to properly re-initialize the VMD domain during repetitive driver
>> attachment or reboot tests, ensure that the VMD root ports are re-initialized
>> to a blank state that can be re-enumerated appropriately by the PCI core.
>> This is performed by re-initializing all of the bridge windows to ensure
>> that PCI core enumeration does not detect potentially invalid bridge windows
>> and misinterpret them as firmware-assigned windows, when they simply may be
>> invalid bridge window information from a previous boot.
> Rewrap commit log to fit in 75 columns.  No problem about v2 vs v1.

I will take care of it.

>
>> Signed-off-by: Nirmal Patel <nirmal.patel@linux.intel.com>
>> Reviewed-by: Jon Derrick <jonathan.derrick@intel.com>
>> ---
>>  drivers/pci/controller/vmd.c | 35 +++++++++++++++++++++++++++++++++++
>>  1 file changed, 35 insertions(+)
>>
>> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
>> index 6e1c60048774..e52bdb95218e 100644
>> --- a/drivers/pci/controller/vmd.c
>> +++ b/drivers/pci/controller/vmd.c
>> @@ -651,6 +651,39 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
>>  	return 0;
>>  }
>>
>> +
> Remove spurious blank line here.

Sure.

>
>> +static void vmd_domain_reset_windows(struct vmd_dev *vmd)
>> +{
>> +	u8 hdr_type;
>> +	char __iomem *addr;
>> +	int dev_seq;
>> +	u8 functions;
>> +	u8 fn_seq;
>> +	int max_devs = resource_size(&vmd->resources[0]) * 32;
>> +
>> +	for (dev_seq = 0; dev_seq < max_devs; dev_seq++) {
>> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_VENDOR_ID;
>> +		if (readw(addr) != PCI_VENDOR_ID_INTEL)
>> +			continue;
>> +
>> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_HEADER_TYPE;
>> +		hdr_type = readb(addr) & PCI_HEADER_TYPE_MASK;
>> +		if (hdr_type != PCI_HEADER_TYPE_BRIDGE)
>> +			continue;
>> +
>> +		functions = !!(hdr_type & 0x80) ? 8 : 1;
>> +		for (fn_seq = 0; fn_seq < functions; fn_seq++)
>> +		{
> Put "{" on previous line.
>
> Looks quite parallel to vmd_domain_sbr(), except that here we iterate
> through functions as well.  Why does vmd_domain_sbr() not need to
> iterate through functions?

I am not sure if there is VMD hardware with non zero functions.

>
>> +			addr = VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_VENDOR_ID;
>> +			if (readw(addr) != PCI_VENDOR_ID_INTEL)
>> +				continue;
>> +
>> +			memset_io((VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_IO_BASE),
>> +				  0, PCI_ROM_ADDRESS1 - PCI_IO_BASE);
> Make the lines above fit in 80 columns.

Sure.

>
>> +		}
>> +	}
>> +}
>> +
>>  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>>  {
>>  	struct pci_sysdata *sd = &vmd->sysdata;
>> @@ -741,6 +774,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>>  		.parent = res,
>>  	};
>>
>> +	vmd_domain_reset_windows(vmd);
>> +
>>  	sd->vmd_dev = vmd->dev;
>>  	sd->domain = vmd_find_free_domain();
>>  	if (sd->domain < 0)
>> --
>> 2.27.0
>>
On Thu, Jul 22, 2021 at 11:47:06AM -0700, Patel, Nirmal wrote:
> On 7/20/2021 3:42 PM, Bjorn Helgaas wrote:
> > On Tue, Jul 20, 2021 at 01:50:09PM -0700, Nirmal Patel wrote:
> >> In order to properly re-initialize the VMD domain during repetitive driver
> >> attachment or reboot tests, ensure that the VMD root ports are re-initialized
> >> to a blank state that can be re-enumerated appropriately by the PCI core.
> >> This is performed by re-initializing all of the bridge windows to ensure
> >> that PCI core enumeration does not detect potentially invalid bridge windows
> >> and misinterpret them as firmware-assigned windows, when they simply may be
> >> invalid bridge window information from a previous boot.
>
> >> +static void vmd_domain_reset_windows(struct vmd_dev *vmd)
> >> +{
> >> +	u8 hdr_type;
> >> +	char __iomem *addr;
> >> +	int dev_seq;
> >> +	u8 functions;
> >> +	u8 fn_seq;
> >> +	int max_devs = resource_size(&vmd->resources[0]) * 32;
> >> +
> >> +	for (dev_seq = 0; dev_seq < max_devs; dev_seq++) {
> >> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_VENDOR_ID;
> >> +		if (readw(addr) != PCI_VENDOR_ID_INTEL)
> >> +			continue;
> >> +
> >> +		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_HEADER_TYPE;
> >> +		hdr_type = readb(addr) & PCI_HEADER_TYPE_MASK;
> >> +		if (hdr_type != PCI_HEADER_TYPE_BRIDGE)
> >> +			continue;
> >> +
> >> +		functions = !!(hdr_type & 0x80) ? 8 : 1;
> >> +		for (fn_seq = 0; fn_seq < functions; fn_seq++)
> >> +		{
> >
> > Looks quite parallel to vmd_domain_sbr(), except that here we iterate
> > through functions as well.  Why does vmd_domain_sbr() not need to
> > iterate through functions?
>
> I am not sure if there is VMD hardware with non zero functions.

I'm not sure either ;)  Hopefully you can resolve this one way or the
other.  It would be good to either make them the same or add a comment
about why they are different.  Otherwise it just looks like a possible
bug.
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 6e1c60048774..e52bdb95218e 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -651,6 +651,39 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
 	return 0;
 }
 
+
+static void vmd_domain_reset_windows(struct vmd_dev *vmd)
+{
+	u8 hdr_type;
+	char __iomem *addr;
+	int dev_seq;
+	u8 functions;
+	u8 fn_seq;
+	int max_devs = resource_size(&vmd->resources[0]) * 32;
+
+	for (dev_seq = 0; dev_seq < max_devs; dev_seq++) {
+		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_VENDOR_ID;
+		if (readw(addr) != PCI_VENDOR_ID_INTEL)
+			continue;
+
+		addr = VMD_DEVICE_BASE(vmd, dev_seq) + PCI_HEADER_TYPE;
+		hdr_type = readb(addr) & PCI_HEADER_TYPE_MASK;
+		if (hdr_type != PCI_HEADER_TYPE_BRIDGE)
+			continue;
+
+		functions = !!(hdr_type & 0x80) ? 8 : 1;
+		for (fn_seq = 0; fn_seq < functions; fn_seq++)
+		{
+			addr = VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_VENDOR_ID;
+			if (readw(addr) != PCI_VENDOR_ID_INTEL)
+				continue;
+
+			memset_io((VMD_FUNCTION_BASE(vmd, dev_seq, fn_seq) + PCI_IO_BASE),
+				  0, PCI_ROM_ADDRESS1 - PCI_IO_BASE);
+		}
+	}
+}
+
 static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 {
 	struct pci_sysdata *sd = &vmd->sysdata;
@@ -741,6 +774,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 		.parent = res,
 	};
 
+	vmd_domain_reset_windows(vmd);
+
 	sd->vmd_dev = vmd->dev;
 	sd->domain = vmd_find_free_domain();
 	if (sd->domain < 0)