Message ID: 1527768879-88161-2-git-send-email-xiexiuqi@huawei.com (mailing list archive)
State: New, archived
On Thu, May 31, 2018 at 08:14:38PM +0800, Xie XiuQi wrote:
> A numa system may return node which is not online.
> For example, a numa node:
> 1) without memory
> 2) NR_CPUS is very small, and the cpus on the node are not brought up
>
> In this situation, we use NUMA_NO_NODE to avoid oops.
>
> [ 25.732905] Unable to handle kernel NULL pointer dereference at virtual address 00001988
> [ 25.740982] Mem abort info:
> [ 25.743762]   ESR = 0x96000005
> [ 25.746803]   Exception class = DABT (current EL), IL = 32 bits
> [ 25.752711]   SET = 0, FnV = 0
> [ 25.755751]   EA = 0, S1PTW = 0
> [ 25.758878] Data abort info:
> [ 25.761745]   ISV = 0, ISS = 0x00000005
> [ 25.765568]   CM = 0, WnR = 0
> [ 25.768521] [0000000000001988] user address but active_mm is swapper
> [ 25.774861] Internal error: Oops: 96000005 [#1] SMP
> [ 25.779724] Modules linked in:
> [ 25.782768] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.17.0-rc6-mpam+ #115
> [ 25.789714] Hardware name: Huawei D06/D06, BIOS Hisilicon D06 EC UEFI Nemo 2.0 RC0 - B305 05/28/2018
> [ 25.798831] pstate: 80c00009 (Nzcv daif +PAN +UAO)
> [ 25.803612] pc : __alloc_pages_nodemask+0xf0/0xe70
> [ 25.808389] lr : __alloc_pages_nodemask+0x184/0xe70
> [ 25.813252] sp : ffff00000996f660
> [ 25.816553] x29: ffff00000996f660 x28: 0000000000000000
> [ 25.821852] x27: 00000000014012c0 x26: 0000000000000000
> [ 25.827150] x25: 0000000000000003 x24: ffff000008099eac
> [ 25.832449] x23: 0000000000400000 x22: 0000000000000000
> [ 25.837747] x21: 0000000000000001 x20: 0000000000000000
> [ 25.843045] x19: 0000000000400000 x18: 0000000000010e00
> [ 25.848343] x17: 000000000437f790 x16: 0000000000000020
> [ 25.853641] x15: 0000000000000000 x14: 6549435020524541
> [ 25.858939] x13: 20454d502067756c x12: 0000000000000000
> [ 25.864237] x11: ffff00000996f6f0 x10: 0000000000000006
> [ 25.869536] x9 : 00000000000012a4 x8 : ffff8023c000ff90
> [ 25.874834] x7 : 0000000000000000 x6 : ffff000008d73c08
> [ 25.880132] x5 : 0000000000000000 x4 : 0000000000000081
> [ 25.885430] x3 : 0000000000000000 x2 : 0000000000000000
> [ 25.890728] x1 : 0000000000000001 x0 : 0000000000001980
> [ 25.896027] Process swapper/0 (pid: 1, stack limit = 0x        (ptrval))
> [ 25.902712] Call trace:
> [ 25.905146]  __alloc_pages_nodemask+0xf0/0xe70
> [ 25.909577]  allocate_slab+0x94/0x590
> [ 25.913225]  new_slab+0x68/0xc8
> [ 25.916353]  ___slab_alloc+0x444/0x4f8
> [ 25.920088]  __slab_alloc+0x50/0x68
> [ 25.923562]  kmem_cache_alloc_node_trace+0xe8/0x230
> [ 25.928426]  pci_acpi_scan_root+0x94/0x278
> [ 25.932510]  acpi_pci_root_add+0x228/0x4b0
> [ 25.936593]  acpi_bus_attach+0x10c/0x218
> [ 25.940501]  acpi_bus_attach+0xac/0x218
> [ 25.944323]  acpi_bus_attach+0xac/0x218
> [ 25.948144]  acpi_bus_scan+0x5c/0xc0
> [ 25.951708]  acpi_scan_init+0xf8/0x254
> [ 25.955443]  acpi_init+0x310/0x37c
> [ 25.958831]  do_one_initcall+0x54/0x208
> [ 25.962653]  kernel_init_freeable+0x244/0x340
> [ 25.966999]  kernel_init+0x18/0x118
> [ 25.970474]  ret_from_fork+0x10/0x1c
> [ 25.974036] Code: 7100047f 321902a4 1a950095 b5000602 (b9400803)
> [ 25.980162] ---[ end trace 64f0893eb21ec283 ]---
> [ 25.984765] Kernel panic - not syncing: Fatal exception
>
> Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
> Tested-by: Huiqiang Wang <wanghuiqiang@huawei.com>
> Cc: Hanjun Guo <hanjun.guo@linaro.org>
> Cc: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
> Cc: Xishi Qiu <qiuxishi@huawei.com>
> ---
>  arch/arm64/kernel/pci.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
> index 0e2ea1c..e17cc45 100644
> --- a/arch/arm64/kernel/pci.c
> +++ b/arch/arm64/kernel/pci.c
> @@ -170,6 +170,9 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
>  	struct pci_bus *bus, *child;
>  	struct acpi_pci_root_ops *root_ops;
>
> +	if (node != NUMA_NO_NODE && !node_online(node))
> +		node = NUMA_NO_NODE;
> +

This really feels like a bodge, but it does appear to be what other
architectures do, so:

Acked-by: Will Deacon <will.deacon@arm.com>

Will
[+cc akpm, linux-mm, linux-pci]

On Wed, Jun 6, 2018 at 10:44 AM Will Deacon <will.deacon@arm.com> wrote:
>
> On Thu, May 31, 2018 at 08:14:38PM +0800, Xie XiuQi wrote:
> > A numa system may return node which is not online.
> > For example, a numa node:
> > 1) without memory
> > 2) NR_CPUS is very small, and the cpus on the node are not brought up
> >
> > In this situation, we use NUMA_NO_NODE to avoid oops.
[...]
> > +	if (node != NUMA_NO_NODE && !node_online(node))
> > +		node = NUMA_NO_NODE;
> > +
>
> This really feels like a bodge, but it does appear to be what other
> architectures do, so:
>
> Acked-by: Will Deacon <will.deacon@arm.com>

I agree, this doesn't feel like something we should be avoiding in the
caller of kzalloc_node().

I would not expect kzalloc_node() to return memory that's offline, no
matter what node we told it to allocate from. I could imagine it
returning failure, or returning memory from a node that *is* online,
but returning a pointer to offline memory seems broken.

Are we putting memory that's offline in the free list? I don't know
where to look to figure this out.

Bjorn
On Wed 06-06-18 15:39:34, Bjorn Helgaas wrote:
> [+cc akpm, linux-mm, linux-pci]
>
> On Wed, Jun 6, 2018 at 10:44 AM Will Deacon <will.deacon@arm.com> wrote:
[...]
> I would not expect kzalloc_node() to return memory that's offline, no
> matter what node we told it to allocate from. I could imagine it
> returning failure, or returning memory from a node that *is* online,
> but returning a pointer to offline memory seems broken.
>
> Are we putting memory that's offline in the free list? I don't know
> where to look to figure this out.

I am not sure I have the full context but pci_acpi_scan_root calls
	kzalloc_node(sizeof(*info), GFP_KERNEL, node)
and that should fall back to whatever node that is online. Offline node
shouldn't keep any pages behind. So there must be something else going
on here and the patch is not the right way to handle it. What does
faddr2line __alloc_pages_nodemask+0xf0 tell on this kernel?
On 2018/6/7 18:55, Michal Hocko wrote:
> On Wed 06-06-18 15:39:34, Bjorn Helgaas wrote:
[...]
> I am not sure I have the full context but pci_acpi_scan_root calls
> 	kzalloc_node(sizeof(*info), GFP_KERNEL, node)
> and that should fall back to whatever node that is online. Offline node
> shouldn't keep any pages behind. So there must be something else going
> on here and the patch is not the right way to handle it. What does
> faddr2line __alloc_pages_nodemask+0xf0 tells on this kernel?

The whole context is:

The system is booted with a NUMA node has no memory attaching to it
(memory-less NUMA node), also with NR_CPUS less than CPUs presented
in MADT, so CPUs on this memory-less node are not brought up, and
this NUMA node will not be online (but SRAT presents this NUMA node);

Devices attaching to this NUMA node such as PCI host bridge still
return the valid NUMA node via _PXM, but actually that valid NUMA node
is not online which lead to this issue.

Thanks
Hanjun
On Thu 07-06-18 19:55:53, Hanjun Guo wrote:
> On 2018/6/7 18:55, Michal Hocko wrote:
[...]
> The whole context is:
>
> The system is booted with a NUMA node has no memory attaching to it
> (memory-less NUMA node), also with NR_CPUS less than CPUs presented
> in MADT, so CPUs on this memory-less node are not brought up, and
> this NUMA node will not be online (but SRAT presents this NUMA node);
>
> Devices attaching to this NUMA node such as PCI host bridge still
> return the valid NUMA node via _PXM, but actually that valid NUMA node
> is not online which lead to this issue.

But we should have other numa nodes on the zonelists so the allocator
should fall back to other node. If the zonelist is not initialized
properly, though, then this can indeed show up as a problem. Knowing
which exact place has blown up would help get a better picture...
Hi Michal,

On 2018/6/7 20:21, Michal Hocko wrote:
> On Thu 07-06-18 19:55:53, Hanjun Guo wrote:
[...]
> But we should have other numa nodes on the zonelists so the allocator
> should fall back to other node. If the zonelist is not initialized
> properly, though, then this can indeed show up as a problem. Knowing
> which exact place has blown up would help get a better picture...

I specified a non-existent node to allocate memory from using
kzalloc_node, and got the following error message.

I found there is just a VM_WARN, but it does not prevent the memory
allocation from continuing. The nid is later used to access
NODE_DATA(nid), so if nid is invalid, it causes an oops there.

459 /*
460  * Allocate pages, preferring the node given as nid. The node must be valid and
461  * online. For more general interface, see alloc_pages_node().
462  */
463 static inline struct page *
464 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
465 {
466 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
467 	VM_WARN_ON(!node_online(nid));
468
469 	return __alloc_pages(gfp_mask, order, nid);
470 }
471

(I wrote a module that allocates memory on a non-existent node using
kzalloc_node().)

[ 120.061693] WARNING: CPU: 6 PID: 3966 at ./include/linux/gfp.h:467 allocate_slab+0x5fd/0x7e0
[ 120.070095] Modules linked in: bench(OE+) nls_utf8 isofs loop xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack libcrc32c ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dm_mirror dm_region_hash dm_log dm_mod intel_rapl skx_edac nfit vfat libnvdimm fat x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass iTCO_wdt crct10dif_pclmul iTCO_vendor_support crc32_pclmul ghash_clmulni_intel ses pcbc enclosure aesni_intel scsi_transport_sas crypto_simd cryptd sg glue_helper ipmi_si joydev mei_me i2c_i801 ipmi_devintf ioatdma shpchp pcspkr ipmi_msghandler mei dca i2c_core lpc_ich acpi_power_meter nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables
[ 120.140992]  ext4 mbcache jbd2 sd_mod crc32c_intel i40e ahci libahci megaraid_sas libata
[ 120.149053] CPU: 6 PID: 3966 Comm: insmod Tainted: G OE 4.17.0-rc2-RHEL74+ #5
[ 120.157369] Hardware name: Huawei 2288H V5/BC11SPSCB0, BIOS 0.62 03/26/2018
[ 120.164303] RIP: 0010:allocate_slab+0x5fd/0x7e0
[ 120.168817] RSP: 0018:ffff881196947af0 EFLAGS: 00010246
[ 120.174022] RAX: 0000000000000000 RBX: 00000000014012c0 RCX: ffffffffb4bc8173
[ 120.181126] RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff8817aefa7868
[ 120.188233] RBP: 00000000014000c0 R08: ffffed02f5df4f0e R09: ffffed02f5df4f0e
[ 120.195338] R10: ffffed02f5df4f0d R11: ffff8817aefa786f R12: 0000000000000055
[ 120.202444] R13: 0000000000000003 R14: ffff880107c0f800 R15: 0000000000000000
[ 120.209550] FS: 00007f6935d8c740(0000) GS:ffff8817aef80000(0000) knlGS:0000000000000000
[ 120.217606] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 120.223330] CR2: 0000000000c21b88 CR3: 0000001197fd0006 CR4: 00000000007606e0
[ 120.230435] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 120.237541] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 120.244646] PKRU: 55555554
[ 120.247346] Call Trace:
[ 120.249791]  ? __kasan_slab_free+0xff/0x150
[ 120.253960]  ? mpidr_init+0x20/0x30 [bench]
[ 120.258129]  new_slab+0x3d/0x90
[ 120.261262]  ___slab_alloc+0x371/0x640
[ 120.265002]  ? __wake_up_common+0x8a/0x150
[ 120.269085]  ? mpidr_init+0x20/0x30 [bench]
[ 120.273254]  ? mpidr_init+0x20/0x30 [bench]
[ 120.277423]  __slab_alloc+0x40/0x66
[ 120.280901]  kmem_cache_alloc_node_trace+0xbc/0x270
[ 120.285762]  ? mpidr_init+0x20/0x30 [bench]
[ 120.289931]  ? 0xffffffffc0740000
[ 120.293236]  mpidr_init+0x20/0x30 [bench]
[ 120.297236]  do_one_initcall+0x4b/0x1f5
[ 120.301062]  ? do_init_module+0x22/0x233
[ 120.304972]  ? kmem_cache_alloc_trace+0xfe/0x220
[ 120.309571]  ? do_init_module+0x22/0x233
[ 120.313481]  do_init_module+0x77/0x233
[ 120.317218]  load_module+0x21ea/0x2960
[ 120.320955]  ? m_show+0x1d0/0x1d0
[ 120.324264]  ? security_capable+0x39/0x50
[ 120.328261]  __do_sys_finit_module+0x94/0xe0
[ 120.332516]  do_syscall_64+0x55/0x180
[ 120.336171]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 120.341203] RIP: 0033:0x7f69352627f9
[ 120.344767] RSP: 002b:00007ffd7d73f718 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
[ 120.352305] RAX: ffffffffffffffda RBX: 0000000000c201d0 RCX: 00007f69352627f9
[ 120.359411] RDX: 0000000000000000 RSI: 000000000041a2d8 RDI: 0000000000000003
[ 120.366517] RBP: 000000000041a2d8 R08: 0000000000000000 R09: 00007ffd7d73f8b8
[ 120.373622] R10: 0000000000000003 R11: 0000000000000206 R12: 0000000000000000
[ 120.380727] R13: 0000000000c20130 R14: 0000000000000000 R15: 0000000000000000
[ 120.387833] Code: 4b e8 ac 97 eb ff e9 e1 fc ff ff 89 de 89 ef e8 7a 35 ff ff 49 89 c7 4d 85 ff 74 71 0f 1f 44 00 00 e9 f1 fa ff ff e8 cf 54 00 00 <0f> 0b 90 e9 c4 fa ff ff 45 89 e8 b9 b1 05 00 00 48 c7 c2 10 79
[ 120.406620] ---[ end trace 89f801c36550734e ]---
[ 120.411234] BUG: unable to handle kernel paging request at 0000000000002088
[ 120.418168] PGD 8000001197c75067 P4D 8000001197c75067 PUD 119858f067 PMD 0
[ 120.425103] Oops: 0000 [#1] SMP KASAN PTI
[ 120.429097] Modules linked in: bench(OE+) nls_utf8 isofs loop xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack libcrc32c ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dm_mirror dm_region_hash dm_log dm_mod intel_rapl skx_edac nfit vfat libnvdimm fat x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass iTCO_wdt crct10dif_pclmul iTCO_vendor_support crc32_pclmul ghash_clmulni_intel ses pcbc enclosure aesni_intel scsi_transport_sas crypto_simd cryptd sg glue_helper ipmi_si joydev mei_me i2c_i801 ipmi_devintf ioatdma shpchp pcspkr ipmi_msghandler mei dca i2c_core lpc_ich acpi_power_meter nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables
[ 120.499986]  ext4 mbcache jbd2 sd_mod crc32c_intel i40e ahci libahci megaraid_sas libata
[ 120.508045] CPU: 6 PID: 3966 Comm: insmod Tainted: G W OE 4.17.0-rc2-RHEL74+ #5
[ 120.516359] Hardware name: Huawei 2288H V5/BC11SPSCB0, BIOS 0.62 03/26/2018
[ 120.523296] RIP: 0010:__alloc_pages_nodemask+0x10d/0x2c0
[ 120.528586] RSP: 0018:ffff881196947a90 EFLAGS: 00010246
[ 120.533790] RAX: 0000000000000001 RBX: 00000000014012c0 RCX: 0000000000000000
[ 120.540895] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000002080
[ 120.548000] RBP: 00000000014012c0 R08: ffffed0233ccb8f4 R09: ffffed0233ccb8f4
[ 120.555105] R10: ffffed0233ccb8f3 R11: ffff88119e65c79f R12: 0000000000000000
[ 120.562210] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000
[ 120.569316] FS: 00007f6935d8c740(0000) GS:ffff8817aef80000(0000) knlGS:0000000000000000
[ 120.577374] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 120.583095] CR2: 0000000000002088 CR3: 0000001197fd0006 CR4: 00000000007606e0
[ 120.590200] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 120.597307] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 120.604412] PKRU: 55555554
[ 120.607111] Call Trace:
[ 120.609554]  allocate_slab+0xd8/0x7e0
[ 120.613205]  ? __kasan_slab_free+0xff/0x150
[ 120.617376]  ? mpidr_init+0x20/0x30 [bench]
[ 120.621545]  new_slab+0x3d/0x90
[ 120.624678]  ___slab_alloc+0x371/0x640
[ 120.628415]  ? __wake_up_common+0x8a/0x150
[ 120.632498]  ? mpidr_init+0x20/0x30 [bench]
[ 120.636667]  ? mpidr_init+0x20/0x30 [bench]
[ 120.640836]  __slab_alloc+0x40/0x66
[ 120.644315]  kmem_cache_alloc_node_trace+0xbc/0x270
[ 120.649175]  ? mpidr_init+0x20/0x30 [bench]
[ 120.653343]  ? 0xffffffffc0740000
[ 120.656649]  mpidr_init+0x20/0x30 [bench]
[ 120.660645]  do_one_initcall+0x4b/0x1f5
[ 120.664469]  ? do_init_module+0x22/0x233
[ 120.668379]  ? kmem_cache_alloc_trace+0xfe/0x220
[ 120.672978]  ? do_init_module+0x22/0x233
[ 120.676887]  do_init_module+0x77/0x233
[ 120.680624]  load_module+0x21ea/0x2960
[ 120.684360]  ? m_show+0x1d0/0x1d0
[ 120.687667]  ? security_capable+0x39/0x50
[ 120.691663]  __do_sys_finit_module+0x94/0xe0
[ 120.695920]  do_syscall_64+0x55/0x180
[ 120.699571]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 120.704603] RIP: 0033:0x7f69352627f9
[ 120.708166] RSP: 002b:00007ffd7d73f718 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
[ 120.715704] RAX: ffffffffffffffda RBX: 0000000000c201d0 RCX: 00007f69352627f9
[ 120.722808] RDX: 0000000000000000 RSI: 000000000041a2d8 RDI: 0000000000000003
[ 120.729913] RBP: 000000000041a2d8 R08: 0000000000000000 R09: 00007ffd7d73f8b8
[ 120.737019] R10: 0000000000000003 R11: 0000000000000206 R12: 0000000000000000
[ 120.744123] R13: 0000000000c20130 R14: 0000000000000000 R15: 0000000000000000
[ 120.751230] Code: 89 c6 74 0d e8 55 ab 5e 00 8b 74 24 1c 48 8b 3c 24 48 8b 54 24 08 89 d9 c1 e9 17 83 e1 01 48 85 d2 88 4c 24 20 0f 85 25 01 00 00 <3b> 77 08 0f 82 1c 01 00 00 48 89 f8 44 89 ea 48 89 e1 44 89 e6
[ 120.770020] RIP: __alloc_pages_nodemask+0x10d/0x2c0 RSP: ffff881196947a90
[ 120.776780] CR2: 0000000000002088
[ 120.780116] ---[ end trace 89f801c36550734f ]---
[ 120.978922] Kernel panic - not syncing: Fatal exception
[ 120.984186] Kernel Offset: 0x33800000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 121.209501] ---[ end Kernel panic - not syncing: Fatal exception ]---
On Mon 11-06-18 11:23:18, Xie XiuQi wrote:
> Hi Michal,
>
> On 2018/6/7 20:21, Michal Hocko wrote:
[...]
> I specified a non-existent node to allocate memory from using
> kzalloc_node, and got the following error message.
>
> I found there is just a VM_WARN, but it does not prevent the memory
> allocation from continuing. The nid is later used to access
> NODE_DATA(nid), so if nid is invalid, it causes an oops there.
[...]
> (I wrote a module that allocates memory on a non-existent node using
> kzalloc_node().)

OK, so this is artificially broken code, right. You shouldn't get a
non-existent node via standard APIs AFAICS. The original report was
about an existing node which is offline AFAIU. That would be a
different case. If I am missing something and there are legitimate
users that try to allocate from non-existing nodes then we should
handle that in node_zonelist.
[...]
diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index 0e2ea1c..e17cc45 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -170,6 +170,9 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 	struct pci_bus *bus, *child;
 	struct acpi_pci_root_ops *root_ops;
 
+	if (node != NUMA_NO_NODE && !node_online(node))
+		node = NUMA_NO_NODE;
+
 	ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
 	if (!ri)
 		return NULL;