Message ID | cover.1639162845.git.lorenzo@kernel.org (mailing list archive)
---|---
Series | mvneta: introduce XDP multi-buffer support
Lorenzo Bianconi wrote: > This series introduces XDP multi-buffer support. The mvneta driver is > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers, > please focus on how these new types of xdp_{buff,frame} packets > traverse the different layers and on the layout design. It is on purpose > that BPF helpers are kept simple, as we don't want to expose the > internal layout to allow later changes. > > The main idea for the new multi-buffer layout is to reuse the same > structure used for non-linear SKBs. This relies on the "skb_shared_info" > struct at the end of the first buffer to link together subsequent > buffers. Keeping the layout compatible with SKBs is also done to ease > and speed up creating an SKB from an xdp_{buff,frame}. > Converting an xdp_frame to an SKB and delivering it to the network stack is shown > in patch 05/18 (e.g. cpumaps). > > A multi-buffer bit (mb) has been introduced in the flags field of the xdp_{buff,frame} > structure to notify the bpf/network layer whether this is an xdp multi-buffer frame > (mb = 1) or not (mb = 0). > The mb bit will be set by an xdp multi-buffer capable driver only for > non-linear frames, maintaining the capability to receive linear frames > without any extra cost, since the skb_shared_info structure at the end > of the first buffer will be initialized only if mb is set. > Moreover, the flags field in xdp_{buff,frame} will be reused for > xdp rx csum offloading in a future series. > > Typical use cases for this series are: > - Jumbo frames > - Packet header split (please see Google's use-case @ NetDevConf 0x14, [0]) > - TSO/GRO for XDP_REDIRECT > > The following three eBPF helpers (and related selftests) have been introduced: > - bpf_xdp_load_bytes: > This helper is provided as an easy way to load data from an xdp buffer. It > can be used to load len bytes from offset from the frame associated with > xdp_md into the buffer pointed to by buf. > - bpf_xdp_store_bytes: > Store len bytes from buffer buf into the frame associated with xdp_md, at > offset. > - bpf_xdp_get_buff_len: > Return the total frame size (linear + paged parts). > > The bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to take > xdp multi-buff frames into account. > Moreover, similar to skb_header_pointer, we introduced the bpf_xdp_pointer utility > routine to return a pointer to a given position in the xdp_buff if the > requested area (offset + len) is contained in a contiguous memory area; > otherwise it must be copied into a bounce buffer provided by the caller via > bpf_xdp_copy_buf(). > > The BPF_F_XDP_MB flag for bpf_attr has been introduced to notify the kernel that the > eBPF program fully supports xdp multi-buffer. > SEC("xdp_mb/"), SEC_DEF("xdp_devmap_mb/") and SEC_DEF("xdp_cpumap_mb/") have been > introduced to declare xdp multi-buffer support. > The NIC driver is expected to reject an eBPF program if the driver is running in XDP > multi-buffer mode and the program does not support XDP multi-buffer. > In the same way, it is not possible to mix xdp multi-buffer and xdp legacy > programs in a CPUMAP/DEVMAP, or to tail-call an xdp multi-buffer/legacy program from > a legacy/multi-buff one. > > More info about the main idea behind this approach can be found here [1][2]. Thanks for sticking with this. OK for the series, though I really want to see this on some other hardware, preferably 40Gbps or more, ASAP. Acked-by: John Fastabend <john.fastabend@gmail.com>
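For readers new to the thread, here is a minimal sketch of what a multi-buffer-aware XDP program might look like with the helpers and section name described in the cover letter above. The helper prototypes follow the cover letter's descriptions; the program name and the assumption that the updated uapi/libbpf headers from this series are in use are illustrative, not taken from the patches themselves.

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch only: assumes headers updated by this series, which define the
 * bpf_xdp_{get_buff_len,load_bytes,store_bytes} helpers and recognize
 * the "xdp_mb/" section name (declaring multi-buffer support, so libbpf
 * can load the program with BPF_F_XDP_MB set in prog_flags).
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp_mb/")
int xdp_mb_mac_swap(struct xdp_md *ctx)
{
	__u8 eth[14], tmp[6];

	/* Total frame size: linear part plus all paged fragments. */
	if (bpf_xdp_get_buff_len(ctx) < sizeof(eth))
		return XDP_DROP;

	/* Copy out the Ethernet header; per the cover letter this works
	 * even if the requested range spans multiple buffers.
	 */
	if (bpf_xdp_load_bytes(ctx, 0, eth, sizeof(eth)))
		return XDP_DROP;

	/* Swap source and destination MAC addresses... */
	__builtin_memcpy(tmp, eth, 6);
	__builtin_memcpy(eth, eth + 6, 6);
	__builtin_memcpy(eth + 6, tmp, 6);

	/* ...and write the header back into the frame. */
	if (bpf_xdp_store_bytes(ctx, 0, eth, sizeof(eth)))
		return XDP_DROP;

	return XDP_TX;
}

char _license[] SEC("license") = "GPL";
```

Because this program declares multi-buffer support, a driver running in XDP multi-buffer mode (e.g. mvneta with jumbo frames enabled) would accept it, whereas a legacy SEC("xdp") program would be rejected there, per the compatibility rules described above.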
Lorenzo Bianconi <lorenzo@kernel.org> writes: > This series introduces XDP multi-buffer support. The mvneta driver is > the first to support these new "non-linear" xdp_{buff,frame}. > > [...] Great to see this converging; as John said, thanks for sticking with it! Nice round number on the series version as well ;) For the series: Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
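On the kernel side, the bpf_xdp_pointer pattern mentioned in the cover letter mirrors skb_header_pointer: return a direct pointer when the requested range is contiguous, and fall back to a caller-provided bounce buffer otherwise. Below is a rough sketch of a caller, based only on the description in the cover letter (names per the cover letter; the exact signatures live in net/core/filter.c of the series).

```c
/* Sketch of the caller-side pattern for bpf_xdp_pointer(). Assumes it
 * returns an ERR_PTR for out-of-bounds requests, a valid pointer when
 * [offset, offset + len) is contiguous, and NULL when the range
 * straddles a fragment boundary.
 */
static int xdp_load_bytes_sketch(struct xdp_buff *xdp, u32 offset,
				 void *buf, u32 len)
{
	void *ptr = bpf_xdp_pointer(xdp, offset, len);

	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	if (ptr)
		memcpy(buf, ptr, len);	/* contiguous: read in place */
	else
		bpf_xdp_copy_buf(xdp, offset, buf, len, false); /* bounce */

	return 0;
}
```

This keeps the fast path (linear frames, mb = 0) free of any per-fragment walking, which is how the series avoids adding cost to the existing single-buffer case.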
[...] > > Eelco Chaudron (3): > bpf: add multi-buff support to the bpf_xdp_adjust_tail() API > bpf: add multi-buffer support to xdp copy helpers > bpf: selftests: update xdp_adjust_tail selftest to include > multi-buffer > > Lorenzo Bianconi (19): > net: skbuff: add size metadata to skb_shared_info for xdp > xdp: introduce flags field in xdp_buff/xdp_frame > net: mvneta: update mb bit before passing the xdp buffer to eBPF layer > net: mvneta: simplify mvneta_swbm_add_rx_fragment management > net: xdp: add xdp_update_skb_shared_info utility routine > net: marvell: rely on xdp_update_skb_shared_info utility routine > xdp: add multi-buff support to xdp_return_{buff/frame} > net: mvneta: add multi buffer support to XDP_TX > bpf: introduce BPF_F_XDP_MB flag in prog_flags loading the ebpf > program > net: mvneta: enable jumbo frames if the loaded XDP program support mb > bpf: introduce bpf_xdp_get_buff_len helper > bpf: move user_size out of bpf_test_init > bpf: introduce multibuff support to bpf_prog_test_run_xdp() > bpf: test_run: add xdp_shared_info pointer in bpf_test_finish > signature > libbpf: Add SEC name for xdp_mb programs > net: xdp: introduce bpf_xdp_pointer utility routine > bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest > bpf: selftests: add CPUMAP/DEVMAP selftests for xdp multi-buff > xdp: disable XDP_REDIRECT for xdp multi-buff > > Toke Hoiland-Jorgensen (1): > bpf: generalise tail call map compatibility check Hi Alexei and Daniel, I noticed this series' state is now set to "New, archived" in patchwork. Is it due to conflicts? Do I need to repost? Regards, Lorenzo > > drivers/net/ethernet/marvell/mvneta.c | 204 +++++++++------ > include/linux/bpf.h | 32 ++- > include/linux/skbuff.h | 1 + > include/net/xdp.h | 108 +++++++- > include/uapi/linux/bpf.h | 30 +++ > kernel/bpf/arraymap.c | 4 +- > kernel/bpf/core.c | 28 +- > kernel/bpf/cpumap.c | 8 +- > kernel/bpf/devmap.c | 3 +- > kernel/bpf/syscall.c | 25 +- > kernel/trace/bpf_trace.c | 3 + > net/bpf/test_run.c | 115 +++++++-- > net/core/filter.c | 244 +++++++++++++++++- > net/core/xdp.c | 78 +++++- > tools/include/uapi/linux/bpf.h | 30 +++ > tools/lib/bpf/libbpf.c | 8 + > .../bpf/prog_tests/xdp_adjust_frags.c | 103 ++++++++ > .../bpf/prog_tests/xdp_adjust_tail.c | 131 ++++++++++ > .../selftests/bpf/prog_tests/xdp_bpf2bpf.c | 151 ++++++++--- > .../bpf/prog_tests/xdp_cpumap_attach.c | 65 ++++- > .../bpf/prog_tests/xdp_devmap_attach.c | 56 ++++ > .../bpf/progs/test_xdp_adjust_tail_grow.c | 10 +- > .../bpf/progs/test_xdp_adjust_tail_shrink.c | 32 ++- > .../selftests/bpf/progs/test_xdp_bpf2bpf.c | 2 +- > .../bpf/progs/test_xdp_update_frags.c | 42 +++ > .../bpf/progs/test_xdp_with_cpumap_helpers.c | 6 + > .../progs/test_xdp_with_cpumap_mb_helpers.c | 27 ++ > .../bpf/progs/test_xdp_with_devmap_helpers.c | 7 + > .../progs/test_xdp_with_devmap_mb_helpers.c | 27 ++ > 29 files changed, 1368 insertions(+), 212 deletions(-) > create mode 100644 tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c > create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_update_frags.c > create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_mb_helpers.c > create mode 100644 tools/testing/selftests/bpf/progs/test_xdp_with_devmap_mb_helpers.c > > -- > 2.33.1 >
On Tue, Dec 28, 2021 at 6:45 AM Lorenzo Bianconi <lorenzo.bianconi@redhat.com> wrote: > > [...] > > Hi Alexei and Daniel, > > I noticed this series' state is now set to "New, archived" in patchwork. > Is it due to conflicts? Do I need to repost? I believe Daniel had some comments, but please repost anyway. The fresh rebase will be easier to review.