| Message ID | 20220409213053.3117305-1-toke@redhat.com (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 425d239379db03d514cb1c476bfe7c320bb89dfc |
| Delegated to | BPF |
| Series | [bpf] bpf: Fix release of page_pool in BPF_PROG_RUN |
On Sat, Apr 9, 2022 at 2:31 PM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> The live packet mode in BPF_PROG_RUN allocates a page_pool instance for
> each test run instance and uses it for the packet data. On setup it creates
> the page_pool and calls xdp_reg_mem_model() to allow pages to be returned
> properly from the XDP data path. However, xdp_reg_mem_model() also raises
> the reference count of the page_pool itself, so the single
> page_pool_destroy() call on teardown was not enough to actually release
> the pool. To fix this, add an additional xdp_unreg_mem_model() call on
> teardown.
>
> Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
> Reported-by: Freysteinn Alfredsson <freysteinn.alfredsson@kau.se>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  net/bpf/test_run.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index e7b9c2636d10..af709c182674 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -108,6 +108,7 @@ struct xdp_test_data {
>  	struct page_pool *pp;
>  	struct xdp_frame **frames;
>  	struct sk_buff **skbs;
> +	struct xdp_mem_info mem;
>  	u32 batch_size;
>  	u32 frame_cnt;
>  };
> @@ -147,7 +148,6 @@ static void xdp_test_run_init_page(struct page *page, void *arg)
>
>  static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_ctx)
>  {
> -	struct xdp_mem_info mem = {};
>  	struct page_pool *pp;
>  	int err = -ENOMEM;
>  	struct page_pool_params pp_params = {
> @@ -174,7 +174,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
>  	}
>
>  	/* will copy 'mem.id' into pp->xdp_mem_id */
> -	err = xdp_reg_mem_model(&mem, MEM_TYPE_PAGE_POOL, pp);
> +	err = xdp_reg_mem_model(&xdp->mem, MEM_TYPE_PAGE_POOL, pp);
>  	if (err)
>  		goto err_mmodel;
>
> @@ -202,6 +202,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
>
>  static void xdp_test_run_teardown(struct xdp_test_data *xdp)
>  {
> +	xdp_unreg_mem_model(&xdp->mem);
>  	page_pool_destroy(xdp->pp);
>  	kfree(xdp->frames);
>  	kfree(xdp->skbs);
> --
> 2.35.1
>
Hello:

This patch was applied to bpf/bpf.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Sat, 9 Apr 2022 23:30:53 +0200 you wrote:
> The live packet mode in BPF_PROG_RUN allocates a page_pool instance for
> each test run instance and uses it for the packet data. On setup it creates
> the page_pool and calls xdp_reg_mem_model() to allow pages to be returned
> properly from the XDP data path. However, xdp_reg_mem_model() also raises
> the reference count of the page_pool itself, so the single
> page_pool_destroy() call on teardown was not enough to actually release
> the pool. To fix this, add an additional xdp_unreg_mem_model() call on
> teardown.
>
> [...]

Here is the summary with links:
  - [bpf] bpf: Fix release of page_pool in BPF_PROG_RUN
    https://git.kernel.org/bpf/bpf/c/425d239379db

You are awesome, thank you!
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index e7b9c2636d10..af709c182674 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -108,6 +108,7 @@ struct xdp_test_data {
 	struct page_pool *pp;
 	struct xdp_frame **frames;
 	struct sk_buff **skbs;
+	struct xdp_mem_info mem;
 	u32 batch_size;
 	u32 frame_cnt;
 };
@@ -147,7 +148,6 @@ static void xdp_test_run_init_page(struct page *page, void *arg)
 
 static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_ctx)
 {
-	struct xdp_mem_info mem = {};
 	struct page_pool *pp;
 	int err = -ENOMEM;
 	struct page_pool_params pp_params = {
@@ -174,7 +174,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 	}
 
 	/* will copy 'mem.id' into pp->xdp_mem_id */
-	err = xdp_reg_mem_model(&mem, MEM_TYPE_PAGE_POOL, pp);
+	err = xdp_reg_mem_model(&xdp->mem, MEM_TYPE_PAGE_POOL, pp);
 	if (err)
 		goto err_mmodel;
 
@@ -202,6 +202,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 
 static void xdp_test_run_teardown(struct xdp_test_data *xdp)
 {
+	xdp_unreg_mem_model(&xdp->mem);
 	page_pool_destroy(xdp->pp);
 	kfree(xdp->frames);
 	kfree(xdp->skbs);
The live packet mode in BPF_PROG_RUN allocates a page_pool instance for
each test run instance and uses it for the packet data. On setup it creates
the page_pool and calls xdp_reg_mem_model() to allow pages to be returned
properly from the XDP data path. However, xdp_reg_mem_model() also raises
the reference count of the page_pool itself, so the single
page_pool_destroy() call on teardown was not enough to actually release
the pool. To fix this, add an additional xdp_unreg_mem_model() call on
teardown.

Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Reported-by: Freysteinn Alfredsson <freysteinn.alfredsson@kau.se>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 net/bpf/test_run.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)