Message ID | 20220208220509.4180389-3-song@kernel.org (mailing list archive) |
---|---|
State | Accepted |
Commit | c1b13a9451ab9d46eefb80a2cc4b8b3206460829 |
Delegated to: | BPF |
Headers | show |
Series | fix bpf_prog_pack build errors | expand |
Context | Check | Description |
---|---|---|
bpf/vmtest-bpf-next-PR | success | PR summary |
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/apply | fail | Patch does not apply to bpf-next |
bpf/vmtest-bpf-next | success | VM_Test |
On 2/8/22 14:05, Song Liu wrote:
> Fix build with CONFIG_TRANSPARENT_HUGEPAGE=n with BPF_PROG_PACK_SIZE as
> PAGE_SIZE.
>
> Fixes: 57631054fae6 ("bpf: Introduce bpf_prog_pack allocator")
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Song Liu <song@kernel.org>
> ---
>  kernel/bpf/core.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 306aa63fa58e..9519264ab1ee 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -814,7 +814,11 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
>   * allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86)
>   * to host BPF programs.
>   */
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define BPF_PROG_PACK_SIZE	HPAGE_PMD_SIZE
> +#else
> +#define BPF_PROG_PACK_SIZE	PAGE_SIZE
> +#endif
>  #define BPF_PROG_CHUNK_SHIFT	6
>  #define BPF_PROG_CHUNK_SIZE	(1 << BPF_PROG_CHUNK_SHIFT)
>  #define BPF_PROG_CHUNK_MASK	(~(BPF_PROG_CHUNK_SIZE - 1))

BTW, I do not understand why module_alloc(HPAGE_PMD_SIZE) would
necessarily allocate a huge page.

I am pretty sure it does not on an x86_64 dual-socket host (NUMA).

It seems you need to multiply this by num_online_nodes(), or change the
way __vmalloc_node_range() works, because it currently does:

	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
		unsigned long size_per_node;

		/*
		 * Try huge pages. Only try for PAGE_KERNEL allocations,
		 * others like modules don't yet expect huge pages in
		 * their allocations due to apply_to_page_range not
		 * supporting them.
		 */

		size_per_node = size;
		if (node == NUMA_NO_NODE)
<*>			size_per_node /= num_online_nodes();
		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
			shift = PMD_SHIFT;
		else
			shift = arch_vmap_pte_supported_shift(size_per_node);
Hi Eric,

> On Feb 9, 2022, at 5:13 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> On 2/8/22 14:05, Song Liu wrote:
>> Fix build with CONFIG_TRANSPARENT_HUGEPAGE=n with BPF_PROG_PACK_SIZE as
>> PAGE_SIZE.
>
> [...]
>
> BTW, I do not understand why module_alloc(HPAGE_PMD_SIZE) would
> necessarily allocate a huge page.
>
> I am pretty sure it does not on an x86_64 dual-socket host (NUMA).
>
> It seems you need to multiply this by num_online_nodes(), or change the
> way __vmalloc_node_range() works, because it currently does:
>
> 	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
> 		unsigned long size_per_node;
>
> 		/*
> 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
> 		 * others like modules don't yet expect huge pages in
> 		 * their allocations due to apply_to_page_range not
> 		 * supporting them.
> 		 */
>
> 		size_per_node = size;
> 		if (node == NUMA_NO_NODE)
> <*>			size_per_node /= num_online_nodes();
> 		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
> 			shift = PMD_SHIFT;
> 		else
> 			shift = arch_vmap_pte_supported_shift(size_per_node);

Thanks for highlighting this issue!
I will address this in a follow-up commit.

Regards,
Song
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 306aa63fa58e..9519264ab1ee 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -814,7 +814,11 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
  * allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86)
  * to host BPF programs.
  */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define BPF_PROG_PACK_SIZE	HPAGE_PMD_SIZE
+#else
+#define BPF_PROG_PACK_SIZE	PAGE_SIZE
+#endif
 #define BPF_PROG_CHUNK_SHIFT	6
 #define BPF_PROG_CHUNK_SIZE	(1 << BPF_PROG_CHUNK_SHIFT)
 #define BPF_PROG_CHUNK_MASK	(~(BPF_PROG_CHUNK_SIZE - 1))
Fix build with CONFIG_TRANSPARENT_HUGEPAGE=n with BPF_PROG_PACK_SIZE as
PAGE_SIZE.

Fixes: 57631054fae6 ("bpf: Introduce bpf_prog_pack allocator")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Song Liu <song@kernel.org>
---
 kernel/bpf/core.c | 4 ++++
 1 file changed, 4 insertions(+)