Message ID | 20210922075613.12186-9-magnus.karlsson@gmail.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 872a1184dbf2b6ed9f435d6a37ad8007126da982 |
Delegated to: | BPF |
Series | xsk: i40e: ice: introduce batching for Rx buffer allocation |
Context | Check | Description |
---|---|---|
netdev/cover_letter | success | Link |
netdev/fixes_present | success | Link |
netdev/patch_count | success | Link |
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/subject_prefix | success | Link |
netdev/cc_maintainers | warning | 11 maintainers not CCed: davem@davemloft.net kpsingh@kernel.org hawk@kernel.org john.fastabend@gmail.com yhs@fb.com linux-kselftest@vger.kernel.org shuah@kernel.org kuba@kernel.org andrii@kernel.org songliubraving@fb.com kafai@fb.com |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Link |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 0 this patch: 0 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success | Link |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 36 lines checked |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 0 this patch: 0 |
netdev/header_inline | success | Link |
bpf/vmtest-bpf-next | success | VM_Test |
bpf/vmtest-bpf-next-PR | success | PR summary |
```diff
diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
index 97591e2a69f7..c5c68b860ae0 100644
--- a/tools/testing/selftests/bpf/xdpxceiver.c
+++ b/tools/testing/selftests/bpf/xdpxceiver.c
@@ -977,13 +977,18 @@ static void *worker_testapp_validate_tx(void *arg)
 
 static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
 {
-	u32 idx = 0, i;
+	u32 idx = 0, i, buffers_to_fill;
 	int ret;
 
-	ret = xsk_ring_prod__reserve(&umem->fq, XSK_RING_PROD__DEFAULT_NUM_DESCS, &idx);
-	if (ret != XSK_RING_PROD__DEFAULT_NUM_DESCS)
+	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
+		buffers_to_fill = umem->num_frames;
+	else
+		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;
+
+	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
+	if (ret != buffers_to_fill)
 		exit_with_error(ENOSPC);
-	for (i = 0; i < XSK_RING_PROD__DEFAULT_NUM_DESCS; i++) {
+	for (i = 0; i < buffers_to_fill; i++) {
 		u64 addr;
 
 		if (pkt_stream->use_addr_for_fill) {
@@ -993,12 +998,12 @@ static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream
 				break;
 			addr = pkt->addr;
 		} else {
-			addr = (i % umem->num_frames) * umem->frame_size + DEFAULT_OFFSET;
+			addr = i * umem->frame_size + DEFAULT_OFFSET;
 		}
 
 		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
 	}
-	xsk_ring_prod__submit(&umem->fq, XSK_RING_PROD__DEFAULT_NUM_DESCS);
+	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
 }
 
 static void *worker_testapp_validate_rx(void *arg)
```
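For easier review, below is a sketch of how xsk_populate_fill_ring() reads with the patch applied, reconstructed from the two hunks above. The unchanged context (the pkt_stream_get_pkt() lookup and the if (!pkt) check) is assumed from the surrounding xdpxceiver.c code rather than shown in the hunks.

```c
/* Post-patch xsk_populate_fill_ring(), reconstructed from the hunks above.
 * Types and helpers (xsk_umem_info, pkt_stream, pkt_stream_get_pkt(),
 * exit_with_error(), DEFAULT_OFFSET) come from the rest of xdpxceiver.c.
 */
static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
{
	u32 idx = 0, i, buffers_to_fill;
	int ret;

	/* Never ask for more fill entries than the umem has frames. */
	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
		buffers_to_fill = umem->num_frames;
	else
		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;

	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
	if (ret != buffers_to_fill)
		exit_with_error(ENOSPC);
	for (i = 0; i < buffers_to_fill; i++) {
		u64 addr;

		if (pkt_stream->use_addr_for_fill) {
			struct pkt *pkt = pkt_stream_get_pkt(pkt_stream, i);

			if (!pkt)
				break;
			addr = pkt->addr;
		} else {
			/* One fill entry per frame, so no modulo wrap-around. */
			addr = i * umem->frame_size + DEFAULT_OFFSET;
		}

		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
	}
	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
}
```

Capping buffers_to_fill at umem->num_frames means a umem with fewer frames than XSK_RING_PROD__DEFAULT_NUM_DESCS no longer over-populates the fill ring, and in the default (non use_addr_for_fill) case each frame is used at most once, which is why the (i % umem->num_frames) wrap-around can be dropped.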