Message ID | 1642622346-22861-1-git-send-email-longli@linuxonhyperv.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Lorenzo Pieralisi |
Series | [v3] PCI: hv: Fix NUMA node assignment when kernel boots with custom NUMA topology |
From: longli@linuxonhyperv.com <longli@linuxonhyperv.com> Sent: Wednesday, January 19, 2022 11:59 AM
>
> When kernel boots with a NUMA topology with some NUMA nodes offline, the PCI
> driver should only set an online NUMA node on the device. This can happen
> during KDUMP where some NUMA nodes are not made online by the KDUMP kernel.
>
> This patch also fixes the case where kernel is booting with "numa=off".
>
> Fixes: 999dd956d838 ("PCI: hv: Add support for protocol 1.3 and support PCI_BUS_RELATIONS2")
>
> Signed-off-by: Long Li <longli@microsoft.com>
>
> Change log:
> v2: use numa_map_to_online_node() to assign a node to device (suggested by
> Michael Kelly <mikelley@microsoft.com>)
>
> v3: add "Fixes" and check for num_possible_nodes()
> ---
>  drivers/pci/controller/pci-hyperv.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index 6c9efeefae1b..b5276e81bb44 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -2129,8 +2129,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
>  		if (!hv_dev)
>  			continue;
>
> -		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
> -			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
> +		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
> +		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
> +			/*
> +			 * The kernel may boot with some NUMA nodes offline
> +			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
> +			 * "numa=off". In those cases, adjust the host provided
> +			 * NUMA node to a valid NUMA node used by the kernel.
> +			 */
> +			set_dev_node(&dev->dev,
> +				     numa_map_to_online_node(
> +					     hv_dev->desc.virtual_numa_node));
>
>  		put_pcichild(hv_dev);
>  	}
> --
> 2.25.1

Reviewed-by: Michael Kelley <mikelley@microsoft.com>
On Wed, Jan 19, 2022 at 11:59:06AM -0800, longli@linuxonhyperv.com wrote:
> From: Long Li <longli@microsoft.com>
>
> When kernel boots with a NUMA topology with some NUMA nodes offline, the PCI
> driver should only set an online NUMA node on the device. This can happen
> during KDUMP where some NUMA nodes are not made online by the KDUMP kernel.
>
> This patch also fixes the case where kernel is booting with "numa=off".
>
> Fixes: 999dd956d838 ("PCI: hv: Add support for protocol 1.3 and support PCI_BUS_RELATIONS2")
>

No blank line here, please

> Signed-off-by: Long Li <longli@microsoft.com>

Everything below needs to be under "---" marker.

Thanks

>
> Change log:
> v2: use numa_map_to_online_node() to assign a node to device (suggested by
> Michael Kelly <mikelley@microsoft.com>)
>
> v3: add "Fixes" and check for num_possible_nodes()
> ---
>  drivers/pci/controller/pci-hyperv.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index 6c9efeefae1b..b5276e81bb44 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -2129,8 +2129,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
>  		if (!hv_dev)
>  			continue;
>
> -		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
> -			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
> +		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
> +		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
> +			/*
> +			 * The kernel may boot with some NUMA nodes offline
> +			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
> +			 * "numa=off". In those cases, adjust the host provided
> +			 * NUMA node to a valid NUMA node used by the kernel.
> +			 */
> +			set_dev_node(&dev->dev,
> +				     numa_map_to_online_node(
> +					     hv_dev->desc.virtual_numa_node));
>
>  		put_pcichild(hv_dev);
>  	}
> --
> 2.25.1
>
> Subject: Re: [Patch v3] PCI: hv: Fix NUMA node assignment when kernel boots
> with custom NUMA topology
>
> On Wed, Jan 19, 2022 at 11:59:06AM -0800, longli@linuxonhyperv.com wrote:
> > From: Long Li <longli@microsoft.com>
> >
> > When kernel boots with a NUMA topology with some NUMA nodes offline,
> > the PCI driver should only set an online NUMA node on the device. This
> > can happen during KDUMP where some NUMA nodes are not made online by
> > the KDUMP kernel.
> >
> > This patch also fixes the case where kernel is booting with "numa=off".
> >
> > Fixes: 999dd956d838 ("PCI: hv: Add support for protocol 1.3 and
> > support PCI_BUS_RELATIONS2")
> >
>
> No blank line here, please
>
> > Signed-off-by: Long Li <longli@microsoft.com>
>
> Everything below needs to be under "---" marker.

I'm sending v4 to address the comments.

Long

>
> Thanks
>
> >
> > Change log:
> > v2: use numa_map_to_online_node() to assign a node to device
> > (suggested by Michael Kelly <mikelley@microsoft.com>)
> >
> > v3: add "Fixes" and check for num_possible_nodes()
> > ---
> >  drivers/pci/controller/pci-hyperv.c | 13 +++++++++++--
> >  1 file changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> > index 6c9efeefae1b..b5276e81bb44 100644
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -2129,8 +2129,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
> >  		if (!hv_dev)
> >  			continue;
> >
> > -		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
> > -			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
> > +		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
> > +		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
> > +			/*
> > +			 * The kernel may boot with some NUMA nodes offline
> > +			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
> > +			 * "numa=off". In those cases, adjust the host provided
> > +			 * NUMA node to a valid NUMA node used by the kernel.
> > +			 */
> > +			set_dev_node(&dev->dev,
> > +				     numa_map_to_online_node(
> > +					     hv_dev->desc.virtual_numa_node));
> >
> >  		put_pcichild(hv_dev);
> >  	}
> > --
> > 2.25.1
> >
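For readers following the review feedback: the layout being asked for puts the change log below the "---" marker, where `git am` discards it when applying the patch, and removes the blank line between the Fixes: and Signed-off-by: tags. The sketch below is illustrative only and is not the actual v4 submission.

```
PCI: hv: Fix NUMA node assignment when kernel boots with custom NUMA topology

<commit message body>

Fixes: 999dd956d838 ("PCI: hv: Add support for protocol 1.3 and support PCI_BUS_RELATIONS2")
Signed-off-by: Long Li <longli@microsoft.com>
---
Change log:
v2: use numa_map_to_online_node() to assign a node to device
v3: add "Fixes" and check for num_possible_nodes()

 drivers/pci/controller/pci-hyperv.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
...
```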
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 6c9efeefae1b..b5276e81bb44 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -2129,8 +2129,17 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
 		if (!hv_dev)
 			continue;
 
-		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
-			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);
+		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY &&
+		    hv_dev->desc.virtual_numa_node < num_possible_nodes())
+			/*
+			 * The kernel may boot with some NUMA nodes offline
+			 * (e.g. in a KDUMP kernel) or with NUMA disabled via
+			 * "numa=off". In those cases, adjust the host provided
+			 * NUMA node to a valid NUMA node used by the kernel.
+			 */
+			set_dev_node(&dev->dev,
+				     numa_map_to_online_node(
+					     hv_dev->desc.virtual_numa_node));
 
 		put_pcichild(hv_dev);
 	}
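To make the patch's decision concrete outside the kernel, here is a minimal, self-contained C sketch of the same logic: accept the host-provided node only if it is within the possible range, and map it to an online node before using it. Every name in the sketch (`possible_node_count`, `map_to_online_node`, `choose_dev_node`, `node_online_map`) is a hypothetical stand-in for the kernel's `num_possible_nodes()`, `numa_map_to_online_node()` and the device-node assignment in `hv_pci_assign_numa_node()`; in the real kernel, `numa_map_to_online_node()` picks a nearby online node by NUMA distance rather than the simple scan shown here.

```c
/*
 * Illustration only, not kernel code: a userspace sketch of the node
 * selection the patch performs. The "online" map and the helpers below
 * are hypothetical stand-ins for the kernel's node masks,
 * num_possible_nodes() and numa_map_to_online_node().
 */
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define MAX_NODES	8

/* Pretend only node 0 is online, as in a kdump kernel or "numa=off". */
static const bool node_online_map[MAX_NODES] = { [0] = true };

static unsigned int possible_node_count(void)
{
	return MAX_NODES;
}

/* Roughly what numa_map_to_online_node() does: fall back to an online node. */
static int map_to_online_node(unsigned int node)
{
	if (node < MAX_NODES && node_online_map[node])
		return (int)node;
	for (int n = 0; n < MAX_NODES; n++)
		if (node_online_map[n])
			return n;
	return NUMA_NO_NODE;
}

/* Mirrors the decision in the patched hv_pci_assign_numa_node(). */
static int choose_dev_node(bool has_numa_affinity, unsigned int virtual_numa_node)
{
	if (has_numa_affinity && virtual_numa_node < possible_node_count())
		return map_to_online_node(virtual_numa_node);
	return NUMA_NO_NODE;	/* leave the device's node unset */
}

int main(void)
{
	/* Host reports node 3, but only node 0 is online: the device gets node 0. */
	printf("assigned node = %d\n", choose_dev_node(true, 3));
	/* Host-provided node beyond the possible range: no assignment is made. */
	printf("assigned node = %d\n", choose_dev_node(true, 42));
	return 0;
}
```

Compiled with any C compiler, the first call falls back from the offline host-provided node 3 to node 0, and the out-of-range node 42 is left unassigned, matching the `num_possible_nodes()` guard added in v3 of the patch.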