Message ID | 20230725162654.912897-1-adam@wowsignal.io (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | libbpf: Expose API to consume one ring at a time |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Not a local patch |
bpf/vmtest-bpf-next-PR | success | PR summary |
bpf/vmtest-bpf-next-VM_Test-1 | success | Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain_full }} |
bpf/vmtest-bpf-next-VM_Test-2 | success | Logs for ShellCheck |
bpf/vmtest-bpf-next-VM_Test-3 | fail | Logs for build for aarch64 with gcc |
bpf/vmtest-bpf-next-VM_Test-4 | fail | Logs for build for s390x with gcc |
bpf/vmtest-bpf-next-VM_Test-5 | fail | Logs for build for x86_64 with gcc |
bpf/vmtest-bpf-next-VM_Test-6 | fail | Logs for build for x86_64 with llvm-16 |
bpf/vmtest-bpf-next-VM_Test-7 | success | Logs for set-matrix |
bpf/vmtest-bpf-next-VM_Test-8 | success | Logs for veristat |
Hi,

On 7/26/2023 12:26 AM, Adam Sindelar wrote:
> We already provide ring_buffer__epoll_fd to enable use of external
> polling systems. However, the only API available to consume the ring
> buffer is ring_buffer__consume, which always checks all rings. When
> polling for many events, this can be wasteful.
>
> Signed-off-by: Adam Sindelar <adam@wowsignal.io>
> ---
>  tools/lib/bpf/libbpf.h  |  1 +
>  tools/lib/bpf/ringbuf.c | 15 +++++++++++++++
>  2 files changed, 16 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 55b97b2087540..20ccc65eb3f9d 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1195,6 +1195,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
>  				ring_buffer_sample_fn sample_cb, void *ctx);
>  LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
>  LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
> +LIBBPF_API int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id);
>  LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
>
>  struct user_ring_buffer_opts {
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index 02199364db136..8d087bfc7d005 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -290,6 +290,21 @@ int ring_buffer__consume(struct ring_buffer *rb)
>  	return res;
>  }
>
> +/* Consume available data from a single RINGBUF map identified by its ID.
> + * The ring ID is returned in epoll_data by epoll_wait when called with
> + * ring_buffer__epoll_fd.
> + */
> +int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id)
> +{
> +	struct ring *ring;
> +
> +	if (ring_id >= rb->ring_cnt)
> +		return libbpf_err(-EINVAL);
> +
> +	ring = &rb->rings[ring_id];
> +	return ringbuf_process_ring(ring);

When ringbuf_process_ring() returns an error, we need to use
libbpf_err() to set the errno accordingly.

> +}
> +
>  /* Poll for available data and consume records, if any are available.
>   * Returns number of records consumed (or INT_MAX, whichever is less), or
>   * negative number, if any of the registered callbacks returned error.
Hi,

On 7/27/2023 9:06 AM, Hou Tao wrote:
> Hi,
>
> On 7/26/2023 12:26 AM, Adam Sindelar wrote:
>> We already provide ring_buffer__epoll_fd to enable use of external
>> polling systems. However, the only API available to consume the ring
>> buffer is ring_buffer__consume, which always checks all rings. When
>> polling for many events, this can be wasteful.
>>
>> Signed-off-by: Adam Sindelar <adam@wowsignal.io>
>> ---
>>  tools/lib/bpf/libbpf.h  |  1 +
>>  tools/lib/bpf/ringbuf.c | 15 +++++++++++++++
>>  2 files changed, 16 insertions(+)
>>
>> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> index 55b97b2087540..20ccc65eb3f9d 100644
>> --- a/tools/lib/bpf/libbpf.h
>> +++ b/tools/lib/bpf/libbpf.h
>> @@ -1195,6 +1195,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
>>  				ring_buffer_sample_fn sample_cb, void *ctx);
>>  LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
>>  LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
>> +LIBBPF_API int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id);
>>  LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
>>
>>  struct user_ring_buffer_opts {
>> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
>> index 02199364db136..8d087bfc7d005 100644
>> --- a/tools/lib/bpf/ringbuf.c
>> +++ b/tools/lib/bpf/ringbuf.c
>> @@ -290,6 +290,21 @@ int ring_buffer__consume(struct ring_buffer *rb)
>>  	return res;
>>  }
>>
>> +/* Consume available data from a single RINGBUF map identified by its ID.
>> + * The ring ID is returned in epoll_data by epoll_wait when called with
>> + * ring_buffer__epoll_fd.
>> + */
>> +int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id)
>> +{
>> +	struct ring *ring;
>> +
>> +	if (ring_id >= rb->ring_cnt)
>> +		return libbpf_err(-EINVAL);
>> +
>> +	ring = &rb->rings[ring_id];
>> +	return ringbuf_process_ring(ring);
> When ringbuf_process_ring() returns an error, we need to use
> libbpf_err() to set the errno accordingly.

It seems that even when ringbuf_process_ring() returns a positive
result, we also need to cap it under INT_MAX, otherwise it may be cast
into a negative error.

>> +}
>> +
>>  /* Poll for available data and consume records, if any are available.
>>   * Returns number of records consumed (or INT_MAX, whichever is less), or
>>   * negative number, if any of the registered callbacks returned error.
>
>
> .
On Thu, Jul 27, 2023 at 02:17:12PM +0800, Hou Tao wrote:
> Hi,
>
> On 7/27/2023 9:06 AM, Hou Tao wrote:
> > Hi,
> >
> > On 7/26/2023 12:26 AM, Adam Sindelar wrote:
> >> We already provide ring_buffer__epoll_fd to enable use of external
> >> polling systems. However, the only API available to consume the ring
> >> buffer is ring_buffer__consume, which always checks all rings. When
> >> polling for many events, this can be wasteful.
> >>
> >> Signed-off-by: Adam Sindelar <adam@wowsignal.io>
> >> ---
> >>  tools/lib/bpf/libbpf.h  |  1 +
> >>  tools/lib/bpf/ringbuf.c | 15 +++++++++++++++
> >>  2 files changed, 16 insertions(+)
> >>
> >> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> >> index 55b97b2087540..20ccc65eb3f9d 100644
> >> --- a/tools/lib/bpf/libbpf.h
> >> +++ b/tools/lib/bpf/libbpf.h
> >> @@ -1195,6 +1195,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
> >>  				ring_buffer_sample_fn sample_cb, void *ctx);
> >>  LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
> >>  LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
> >> +LIBBPF_API int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id);
> >>  LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
> >>
> >>  struct user_ring_buffer_opts {
> >> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> >> index 02199364db136..8d087bfc7d005 100644
> >> --- a/tools/lib/bpf/ringbuf.c
> >> +++ b/tools/lib/bpf/ringbuf.c
> >> @@ -290,6 +290,21 @@ int ring_buffer__consume(struct ring_buffer *rb)
> >>  	return res;
> >>  }
> >>
> >> +/* Consume available data from a single RINGBUF map identified by its ID.
> >> + * The ring ID is returned in epoll_data by epoll_wait when called with
> >> + * ring_buffer__epoll_fd.
> >> + */
> >> +int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id)
> >> +{
> >> +	struct ring *ring;
> >> +
> >> +	if (ring_id >= rb->ring_cnt)
> >> +		return libbpf_err(-EINVAL);
> >> +
> >> +	ring = &rb->rings[ring_id];
> >> +	return ringbuf_process_ring(ring);
> > When ringbuf_process_ring() returns an error, we need to use
> > libbpf_err() to set the errno accordingly.
>
> It seems that even when ringbuf_process_ring() returns a positive
> result, we also need to cap it under INT_MAX, otherwise it may be cast
> into a negative error.

Ah, sorry I missed that. Fixed in v3, going out in a few moments.

> >> +}
> >> +
> >>  /* Poll for available data and consume records, if any are available.
> >>   * Returns number of records consumed (or INT_MAX, whichever is less), or
> >>   * negative number, if any of the registered callbacks returned error.
> >
> >
> > .

Thanks!
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 55b97b2087540..20ccc65eb3f9d 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1195,6 +1195,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
 				ring_buffer_sample_fn sample_cb, void *ctx);
 LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
 LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
+LIBBPF_API int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id);
 LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
 
 struct user_ring_buffer_opts {
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index 02199364db136..8d087bfc7d005 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -290,6 +290,21 @@ int ring_buffer__consume(struct ring_buffer *rb)
 	return res;
 }
 
+/* Consume available data from a single RINGBUF map identified by its ID.
+ * The ring ID is returned in epoll_data by epoll_wait when called with
+ * ring_buffer__epoll_fd.
+ */
+int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id)
+{
+	struct ring *ring;
+
+	if (ring_id >= rb->ring_cnt)
+		return libbpf_err(-EINVAL);
+
+	ring = &rb->rings[ring_id];
+	return ringbuf_process_ring(ring);
+}
+
 /* Poll for available data and consume records, if any are available.
  * Returns number of records consumed (or INT_MAX, whichever is less), or
  * negative number, if any of the registered callbacks returned error.
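For context, an external polling loop built on this API might look like the following sketch. Setup of rb via ring_buffer__new() / ring_buffer__add() is assumed and omitted; per the comment in the patch, epoll_wait() reports the ring's ID in epoll_data, so only rings that actually have data are consumed.

```c
#include <sys/epoll.h>
#include <bpf/libbpf.h>

static int poll_rings_once(struct ring_buffer *rb)
{
	struct epoll_event events[64];
	int i, n, err;

	/* Block on the ring buffer's own epoll fd. */
	n = epoll_wait(ring_buffer__epoll_fd(rb), events, 64, -1);
	if (n < 0)
		return -errno;

	for (i = 0; i < n; i++) {
		/* data.fd carries the ring ID here, not a file descriptor */
		err = ring_buffer__consume_ring(rb, events[i].data.fd);
		if (err < 0)
			return err;
	}
	return 0;
}
```

This is the case the cover letter motivates: with many rings registered, ring_buffer__consume() would scan all of them on every wakeup, while this loop touches only the rings epoll flagged as ready.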
We already provide ring_buffer__epoll_fd to enable use of external
polling systems. However, the only API available to consume the ring
buffer is ring_buffer__consume, which always checks all rings. When
polling for many events, this can be wasteful.

Signed-off-by: Adam Sindelar <adam@wowsignal.io>
---
 tools/lib/bpf/libbpf.h  |  1 +
 tools/lib/bpf/ringbuf.c | 15 +++++++++++++++
 2 files changed, 16 insertions(+)