Message ID | 20220310225621.53374-1-toke@redhat.com (mailing list archive)
---|---
State | Accepted
Commit | b6f1f780b3932ae497ed85e79bc8a1e513883624
Delegated to | BPF
Series | [bpf-next,1/2] bpf, test_run: Fix packet size check for live packet mode
Context | Check | Description |
---|---|---
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/subject_prefix | success | Link |
netdev/cover_letter | success | Single patches do not need cover letters |
netdev/patch_count | success | Link |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 26 this patch: 26 |
netdev/cc_maintainers | success | CCed 14 of 14 maintainers |
netdev/build_clang | success | Errors and warnings before: 18 this patch: 18 |
netdev/module_param | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/verify_fixes | success | Fixes tag looks correct |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 31 this patch: 31 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 24 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | success | Was 0 now: 0 |
bpf/vmtest-bpf-next-PR | success | PR summary |
bpf/vmtest-bpf-next | success | VM_Test |
On Thu, Mar 10, 2022 at 11:56:20PM +0100, Toke Høiland-Jørgensen wrote:
> The live packet mode uses some extra space at the start of each page to
> cache data structures so they don't have to be rebuilt at every repetition.
> This space wasn't correctly accounted for in the size checking of the
> arguments supplied to userspace. In addition, the definition of the frame
> size should include the size of the skb_shared_info (as there is other
> logic that subtracts the size of this).
>
> Together, these mistakes resulted in userspace being able to trip the
> XDP_WARN() in xdp_update_frame_from_buff(), which syzbot discovered in
> short order. Fix this by changing the frame size define and adding the
> extra headroom to the bpf_prog_test_run_xdp() function. Also drop the
> max_len parameter to the page_pool init, since this is related to DMA which
> is not used for the page pool instance in PROG_TEST_RUN.
>
> Reported-by: syzbot+0e91362d99386dc5de99@syzkaller.appspotmail.com
> Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> ---
>  net/bpf/test_run.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 24405a280a9b..e7b9c2636d10 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -112,8 +112,7 @@ struct xdp_test_data {
>  	u32 frame_cnt;
>  };
>
> -#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head) \
> -			     - sizeof(struct skb_shared_info))
> +#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
>  #define TEST_XDP_MAX_BATCH 256
>
>  static void xdp_test_run_init_page(struct page *page, void *arg)
> @@ -156,7 +155,6 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
>  		.flags = 0,
>  		.pool_size = xdp->batch_size,
>  		.nid = NUMA_NO_NODE,
> -		.max_len = TEST_XDP_FRAME_SIZE,
>  		.init_callback = xdp_test_run_init_page,
>  		.init_arg = xdp,
>  	};
> @@ -1230,6 +1228,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>  		batch_size = NAPI_POLL_WEIGHT;
>  	else if (batch_size > TEST_XDP_MAX_BATCH)
>  		return -E2BIG;
> +
> +	headroom += sizeof(struct xdp_page_head);

The orig_ctx->data_end will ensure there is a sizeof(struct skb_shared_info)
tailroom? It is quite tricky to read but I don't have a better idea either.

Acked-by: Martin KaFai Lau <kafai@fb.com>

>  	} else if (batch_size) {
>  		return -EINVAL;
>  	}
> --
> 2.35.1
Martin KaFai Lau <kafai@fb.com> writes:

> On Thu, Mar 10, 2022 at 11:56:20PM +0100, Toke Høiland-Jørgensen wrote:
>> The live packet mode uses some extra space at the start of each page to
>> cache data structures so they don't have to be rebuilt at every repetition.
>> This space wasn't correctly accounted for in the size checking of the
>> arguments supplied to userspace. In addition, the definition of the frame
>> size should include the size of the skb_shared_info (as there is other
>> logic that subtracts the size of this).
>>
>> Together, these mistakes resulted in userspace being able to trip the
>> XDP_WARN() in xdp_update_frame_from_buff(), which syzbot discovered in
>> short order. Fix this by changing the frame size define and adding the
>> extra headroom to the bpf_prog_test_run_xdp() function. Also drop the
>> max_len parameter to the page_pool init, since this is related to DMA which
>> is not used for the page pool instance in PROG_TEST_RUN.
>>
>> Reported-by: syzbot+0e91362d99386dc5de99@syzkaller.appspotmail.com
>> Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
>> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
>> ---
>>  net/bpf/test_run.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
>> index 24405a280a9b..e7b9c2636d10 100644
>> --- a/net/bpf/test_run.c
>> +++ b/net/bpf/test_run.c
>> @@ -112,8 +112,7 @@ struct xdp_test_data {
>>  	u32 frame_cnt;
>>  };
>>
>> -#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head) \
>> -			     - sizeof(struct skb_shared_info))
>> +#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
>>  #define TEST_XDP_MAX_BATCH 256
>>
>>  static void xdp_test_run_init_page(struct page *page, void *arg)
>> @@ -156,7 +155,6 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
>>  		.flags = 0,
>>  		.pool_size = xdp->batch_size,
>>  		.nid = NUMA_NO_NODE,
>> -		.max_len = TEST_XDP_FRAME_SIZE,
>>  		.init_callback = xdp_test_run_init_page,
>>  		.init_arg = xdp,
>>  	};
>> @@ -1230,6 +1228,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>>  		batch_size = NAPI_POLL_WEIGHT;
>>  	else if (batch_size > TEST_XDP_MAX_BATCH)
>>  		return -E2BIG;
>> +
>> +	headroom += sizeof(struct xdp_page_head);
> The orig_ctx->data_end will ensure there is a sizeof(struct skb_shared_info)
> tailroom ? It is quite tricky to read but I don't have a better idea
> either.

Yeah, the length checks are all done for the non-live data case (in
bpf_test_init()), so it seemed simpler to just account the extra headroom in
those checks instead of adding an extra check to the live-packet code.

> Acked-by: Martin KaFai Lau <kafai@fb.com>

Thanks!
-Toke
Hello:

This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Thu, 10 Mar 2022 23:56:20 +0100 you wrote:
> The live packet mode uses some extra space at the start of each page to
> cache data structures so they don't have to be rebuilt at every repetition.
> This space wasn't correctly accounted for in the size checking of the
> arguments supplied to userspace. In addition, the definition of the frame
> size should include the size of the skb_shared_info (as there is other
> logic that subtracts the size of this).
>
> [...]

Here is the summary with links:
  - [bpf-next,1/2] bpf, test_run: Fix packet size check for live packet mode
    https://git.kernel.org/bpf/bpf-next/c/b6f1f780b393
  - [bpf-next,2/2] selftests/bpf: Add a test for maximum packet size in xdp_do_redirect
    https://git.kernel.org/bpf/bpf-next/c/c09df4bd3a91

You are awesome, thank you!
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 24405a280a9b..e7b9c2636d10 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -112,8 +112,7 @@ struct xdp_test_data {
 	u32 frame_cnt;
 };
 
-#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head) \
-			     - sizeof(struct skb_shared_info))
+#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
 #define TEST_XDP_MAX_BATCH 256
 
 static void xdp_test_run_init_page(struct page *page, void *arg)
@@ -156,7 +155,6 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 		.flags = 0,
 		.pool_size = xdp->batch_size,
 		.nid = NUMA_NO_NODE,
-		.max_len = TEST_XDP_FRAME_SIZE,
 		.init_callback = xdp_test_run_init_page,
 		.init_arg = xdp,
 	};
@@ -1230,6 +1228,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			batch_size = NAPI_POLL_WEIGHT;
 		else if (batch_size > TEST_XDP_MAX_BATCH)
 			return -E2BIG;
+
+		headroom += sizeof(struct xdp_page_head);
 	} else if (batch_size) {
 		return -EINVAL;
 	}
The live packet mode uses some extra space at the start of each page to
cache data structures so they don't have to be rebuilt at every repetition.
This space wasn't correctly accounted for in the size checking of the
arguments supplied by userspace. In addition, the definition of the frame
size should include the size of the skb_shared_info (as there is other
logic that subtracts the size of this).

Together, these mistakes resulted in userspace being able to trip the
XDP_WARN() in xdp_update_frame_from_buff(), which syzbot discovered in
short order. Fix this by changing the frame size define and adding the
extra headroom to the bpf_prog_test_run_xdp() function. Also drop the
max_len parameter to the page_pool init, since this is related to DMA which
is not used for the page pool instance in PROG_TEST_RUN.

Reported-by: syzbot+0e91362d99386dc5de99@syzkaller.appspotmail.com
Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 net/bpf/test_run.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)