[bpf-next,v2,1/1] docs: BPF_MAP_TYPE_CPUMAP

Message ID 20221102124416.2820268-2-mtahhan@redhat.com (mailing list archive)
State Superseded
Delegated to: BPF
Series docs: BPF_MAP_TYPE_CPUMAP

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 16 maintainers not CCed: kuba@kernel.org sdf@google.com john.fastabend@gmail.com davem@davemloft.net andrii@kernel.org yhs@fb.com ast@kernel.org hawk@kernel.org netdev@vger.kernel.org haoluo@google.com corbet@lwn.net jolsa@kernel.org kpsingh@kernel.org song@kernel.org daniel@iogearbox.net martin.lau@linux.dev
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch warning WARNING: Co-developed-by and Signed-off-by: name/email do not match WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR pending PR summary
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 pending Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-6 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-7 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on s390x with gcc

Commit Message

Maryam Tahhan Nov. 2, 2022, 12:44 p.m. UTC
From: Maryam Tahhan <mtahhan@redhat.com>

Add documentation for BPF_MAP_TYPE_CPUMAP including
kernel version introduced, usage and examples.

Signed-off-by: Maryam Tahhan <mtahhan@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 Documentation/bpf/map_cpumap.rst | 140 +++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)
 create mode 100644 Documentation/bpf/map_cpumap.rst

Comments

Donald Hunter Nov. 3, 2022, 10:24 a.m. UTC | #1
mtahhan@redhat.com writes:

> From: Maryam Tahhan <mtahhan@redhat.com>
>
> Add documentation for BPF_MAP_TYPE_CPUMAP including
> kernel version introduced, usage and examples.
>
> Signed-off-by: Maryam Tahhan <mtahhan@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  Documentation/bpf/map_cpumap.rst | 140 +++++++++++++++++++++++++++++++
>  1 file changed, 140 insertions(+)
>  create mode 100644 Documentation/bpf/map_cpumap.rst
>
> diff --git a/Documentation/bpf/map_cpumap.rst b/Documentation/bpf/map_cpumap.rst
> new file mode 100644
> index 000000000000..23320fb61bf7
> --- /dev/null
> +++ b/Documentation/bpf/map_cpumap.rst
> @@ -0,0 +1,140 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +.. Copyright (C) 2022 Red Hat, Inc.
> +
> +===================
> +BPF_MAP_TYPE_CPUMAP
> +===================
> +
> +.. note::
> +   - ``BPF_MAP_TYPE_CPUMAP`` was introduced in kernel version 4.15
> +
> +``BPF_MAP_TYPE_CPUMAP`` is primarily used as a backend map for the XDP BPF
> +helper ``bpf_redirect_map()`` and the ``XDP_REDIRECT`` action. This map type
> +redirects raw XDP frames to another CPU.
> +
> +A CPUMAP is a scalability and isolation mechanism that allows the steering of packets
> +to dedicated CPUs for processing. An example use-case for this map type is
> +software-based Receive Side Scaling (RSS).
> +
> +The CPUMAP represents the CPUs in the system indexed as the map-key, and the
> +map-value is the config setting (per CPUMAP entry). Each CPUMAP entry has a dedicated
> +kernel thread bound to the given CPU to represent the remote CPU execution unit.
> +
> +Starting from Linux kernel version 5.9 the CPUMAP can run a second XDP program
> +on the remote CPU. This allows an XDP program to split its processing across
> +multiple CPUs. For example, consider a scenario where the initial CPU (that
> +sees/receives the packets) needs to do minimal packet processing and the remote
> +CPU (to which the packet is directed) can spend more cycles processing the frame. The
> +initial CPU is where the XDP redirect program is executed. The remote CPU
> +receives raw``xdp_frame`` objects.

Nit - missing space between raw and ``xdp_frame`` is breaking formatting.
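
Since this paragraph introduces the 5.9 behaviour, it might also be
worth a small userspace sketch of loading a program into a CPUMAP
entry, e.g. (prog_fd, map_fd and the qsize value are placeholders):

    struct bpf_cpumap_val val = {
        .qsize = 2048,          /* queue size towards the remote CPU */
        .bpf_prog.fd = prog_fd, /* XDP program to run on the remote CPU */
    };
    __u32 cpu = 2;

    if (bpf_map_update_elem(map_fd, &cpu, &val, 0) < 0)
        return -1; /* handle error */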

> +
> +Usage
> +=====

Can you add subheadings for "Kernel BPF" and "Userspace" and move
update, lookup, delete under "Userspace"?

> +.. c:function::
> +   long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)

This function signature is for the BPF helper. If it can only be used
from userspace then this should be the libbpf function signature.
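
For reference, the userspace signature from libbpf's bpf.h is:

    int bpf_map_update_elem(int fd, const void *key, const void *value,
                            __u64 flags);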

> +
> + CPU entries can be added or updated using the ``bpf_map_update_elem()``
> + helper. This helper replaces existing elements atomically. The ``value`` parameter
> + can be ``struct bpf_cpumap_val``.

I think this needs to be a stronger statement that says the value must
either be a __u32 or a struct bpf_cpumap_val.
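
A short example of the plain __u32 form would help too, e.g. (map_fd
being a CPUMAP created with a 4-byte value size):

    __u32 cpu = 3;
    __u32 qsize = 2048; /* the value is just the queue size */

    bpf_map_update_elem(map_fd, &cpu, &qsize, 0);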

> + .. note::
> +    The maps can only be updated from user space and not from a BPF program.

Suggest moving this note to the start of the usage section.

> + .. code-block:: c
> +
> +    struct bpf_cpumap_val {
> +        __u32 qsize;  /* queue size to remote target CPU */
> +        union {
> +            int   fd; /* prog fd on map write */
> +            __u32 id; /* prog id on map read */
> +        } bpf_prog;
> +    };

Should also state the valid use of flags, which I think is BPF_ANY or
BPF_EXIST due to the array semantics.
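
i.e., assuming the flag semantics do follow the array ones (cpu, val
and map_fd as in the earlier sketches):

    bpf_map_update_elem(map_fd, &cpu, &val, BPF_ANY);   /* create or update */
    bpf_map_update_elem(map_fd, &cpu, &val, BPF_EXIST); /* update only */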

> +.. c:function::
> +   void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)

This needs to be the libbpf function signature.
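
i.e.:

    int bpf_map_lookup_elem(int fd, const void *key, void *value);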

> + CPU entries can be retrieved using the ``bpf_map_lookup_elem()``
> + helper.
> +
> +.. c:function::
> +   long bpf_map_delete_elem(struct bpf_map *map, const void *key)

This needs to be the libbpf function signature.
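
i.e.:

    int bpf_map_delete_elem(int fd, const void *key);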

> + CPU entries can be deleted using the ``bpf_map_delete_elem()``
> + helper. This helper will return 0 on success, or negative error in case of
> + failure.
> +
> +.. c:function::
> +     long bpf_redirect_map(struct bpf_map *map, u32 key, u64 flags)

Can you put this under a "Kernel BPF" subheading?

> + Redirect the packet to the endpoint referenced by ``map`` at index ``key``.
> + For ``BPF_MAP_TYPE_CPUMAP`` this map contains references to CPUs.
> +
> + The lower two bits of *flags* are used as the return code if the map lookup

Nit - should that be ``flags``?

> + fails. This is so that the return value can be one of the XDP program return
> + codes up to ``XDP_TX``, as chosen by the caller.
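
Perhaps with a one-line example of that, e.g.:

    /* fall back to XDP_PASS if cpu_dest has no CPUMAP entry */
    return bpf_redirect_map(&cpu_map, cpu_dest, XDP_PASS);
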
> +
> +Examples
> +========
> +Kernel
> +------
> +
> +The following code snippet shows how to declare a BPF_MAP_TYPE_CPUMAP called
> +cpu_map and how to redirect packets to a remote CPU using a round robin scheme.

Nit - ``BPF_MAP_TYPE_CPUMAP`` called ``cpu_map``

> +.. code-block:: c
> +
> +   struct {
> +        __uint(type, BPF_MAP_TYPE_CPUMAP);
> +        __type(key, u32);
> +        __type(value, struct bpf_cpumap_val);
> +        __uint(max_entries, 12);
> +    } cpu_map SEC(".maps");
> +
> +    struct {
> +        __uint(type, BPF_MAP_TYPE_ARRAY);
> +        __type(key, u32);
> +        __type(value, u32);
> +        __uint(max_entries, 12);
> +    } cpus_available SEC(".maps");
> +
> +    struct {
> +        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
> +        __type(key, u32);
> +        __type(value, u32);
> +        __uint(max_entries, 1);
> +       } cpus_iterator SEC(".maps");

Nit - closing brace indentation.

> +    SEC("xdp")
> +    int xdp_redir_cpu_round_robin(struct xdp_md *ctx)
> +    {
> +        u32 key = 0;
> +        u32 cpu_dest = 0;
> +        u32 *cpu_selected, *cpu_iterator;
> +        u32 cpu_idx;
> +
> +        cpu_iterator = bpf_map_lookup_elem(&cpus_iterator, &key);
> +        if (!cpu_iterator)
> +            return XDP_ABORTED;
> +        cpu_idx = *cpu_iterator;
> +
> +        *cpu_iterator += 1;
> +        if (*cpu_iterator == bpf_num_possible_cpus())
> +            *cpu_iterator = 0;
> +
> +        cpu_selected = bpf_map_lookup_elem(&cpus_available, &cpu_idx);
> +        if (!cpu_selected)
> +            return XDP_ABORTED;
> +        cpu_dest = *cpu_selected;
> +
> +        if (cpu_dest >= bpf_num_possible_cpus())
> +            return XDP_ABORTED;
> +
> +        return bpf_redirect_map(&cpu_map, cpu_dest, 0);
> +    }

I think the above example should use __u32 instead of u32 because it
should use UAPI definitions, but we should verify this.
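
It might also help to show the userspace side that populates the maps.
A sketch, with cpu_map_fd/cpus_available_fd as placeholder fds and an
arbitrary qsize:

    int nr_cpus = libbpf_num_possible_cpus();

    for (__u32 i = 0; i < nr_cpus; i++) {
        struct bpf_cpumap_val val = { .qsize = 2048 };

        /* create a queue towards each CPU ... */
        bpf_map_update_elem(cpu_map_fd, &i, &val, 0);
        /* ... and record it as a round robin candidate */
        bpf_map_update_elem(cpus_available_fd, &i, &i, 0);
    }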

> +References
> +===========
> +
> +- https://elixir.bootlin.com/linux/v6.0.1/source/kernel/bpf/cpumap.c
> +- https://developers.redhat.com/blog/2021/05/13/receive-side-scaling-rss-with-ebpf-and-cpumap#redirecting_into_a_cpumap

Patch

diff --git a/Documentation/bpf/map_cpumap.rst b/Documentation/bpf/map_cpumap.rst
new file mode 100644
index 000000000000..23320fb61bf7
--- /dev/null
+++ b/Documentation/bpf/map_cpumap.rst
@@ -0,0 +1,140 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+.. Copyright (C) 2022 Red Hat, Inc.
+
+===================
+BPF_MAP_TYPE_CPUMAP
+===================
+
+.. note::
+   - ``BPF_MAP_TYPE_CPUMAP`` was introduced in kernel version 4.15
+
+``BPF_MAP_TYPE_CPUMAP`` is primarily used as a backend map for the XDP BPF
+helper ``bpf_redirect_map()`` and the ``XDP_REDIRECT`` action. This map type
+redirects raw XDP frames to another CPU.
+
+A CPUMAP is a scalability and isolation mechanism that allows the steering of packets
+to dedicated CPUs for processing. An example use-case for this map type is
+software-based Receive Side Scaling (RSS).
+
+The CPUMAP represents the CPUs in the system indexed as the map-key, and the
+map-value is the config setting (per CPUMAP entry). Each CPUMAP entry has a dedicated
+kernel thread bound to the given CPU to represent the remote CPU execution unit.
+
+Starting from Linux kernel version 5.9 the CPUMAP can run a second XDP program
+on the remote CPU. This allows an XDP program to split its processing across
+multiple CPUs. For example, consider a scenario where the initial CPU (that
+sees/receives the packets) needs to do minimal packet processing and the remote
+CPU (to which the packet is directed) can spend more cycles processing the frame. The
+initial CPU is where the XDP redirect program is executed. The remote CPU
+receives raw``xdp_frame`` objects.
+
+Usage
+=====
+
+.. c:function::
+   long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)
+
+ CPU entries can be added or updated using the ``bpf_map_update_elem()``
+ helper. This helper replaces existing elements atomically. The ``value`` parameter
+ can be ``struct bpf_cpumap_val``.
+
+ .. note::
+    The maps can only be updated from user space and not from a BPF program.
+
+ .. code-block:: c
+
+    struct bpf_cpumap_val {
+        __u32 qsize;  /* queue size to remote target CPU */
+        union {
+            int   fd; /* prog fd on map write */
+            __u32 id; /* prog id on map read */
+        } bpf_prog;
+    };
+
+.. c:function::
+   void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)
+
+ CPU entries can be retrieved using the ``bpf_map_lookup_elem()``
+ helper.
+
+.. c:function::
+   long bpf_map_delete_elem(struct bpf_map *map, const void *key)
+
+ CPU entries can be deleted using the ``bpf_map_delete_elem()``
+ helper. This helper will return 0 on success, or negative error in case of
+ failure.
+
+.. c:function::
+     long bpf_redirect_map(struct bpf_map *map, u32 key, u64 flags)
+
+ Redirect the packet to the endpoint referenced by ``map`` at index ``key``.
+ For ``BPF_MAP_TYPE_CPUMAP`` this map contains references to CPUs.
+
+ The lower two bits of *flags* are used as the return code if the map lookup
+ fails. This is so that the return value can be one of the XDP program return
+ codes up to ``XDP_TX``, as chosen by the caller.
+
+Examples
+========
+Kernel
+------
+
+The following code snippet shows how to declare a BPF_MAP_TYPE_CPUMAP called
+cpu_map and how to redirect packets to a remote CPU using a round robin scheme.
+
+.. code-block:: c
+
+   struct {
+        __uint(type, BPF_MAP_TYPE_CPUMAP);
+        __type(key, u32);
+        __type(value, struct bpf_cpumap_val);
+        __uint(max_entries, 12);
+    } cpu_map SEC(".maps");
+
+    struct {
+        __uint(type, BPF_MAP_TYPE_ARRAY);
+        __type(key, u32);
+        __type(value, u32);
+        __uint(max_entries, 12);
+    } cpus_available SEC(".maps");
+
+    struct {
+        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+        __type(key, u32);
+        __type(value, u32);
+        __uint(max_entries, 1);
+       } cpus_iterator SEC(".maps");
+
+    SEC("xdp")
+    int xdp_redir_cpu_round_robin(struct xdp_md *ctx)
+    {
+        u32 key = 0;
+        u32 cpu_dest = 0;
+        u32 *cpu_selected, *cpu_iterator;
+        u32 cpu_idx;
+
+        cpu_iterator = bpf_map_lookup_elem(&cpus_iterator, &key);
+        if (!cpu_iterator)
+            return XDP_ABORTED;
+        cpu_idx = *cpu_iterator;
+
+        *cpu_iterator += 1;
+        if (*cpu_iterator == bpf_num_possible_cpus())
+            *cpu_iterator = 0;
+
+        cpu_selected = bpf_map_lookup_elem(&cpus_available, &cpu_idx);
+        if (!cpu_selected)
+            return XDP_ABORTED;
+        cpu_dest = *cpu_selected;
+
+        if (cpu_dest >= bpf_num_possible_cpus())
+            return XDP_ABORTED;
+
+        return bpf_redirect_map(&cpu_map, cpu_dest, 0);
+    }
+
+References
+===========
+
+- https://elixir.bootlin.com/linux/v6.0.1/source/kernel/bpf/cpumap.c
+- https://developers.redhat.com/blog/2021/05/13/receive-side-scaling-rss-with-ebpf-and-cpumap#redirecting_into_a_cpumap