[RFC,bpf-next,0/3] support string key for hash-table

Message ID: 20211219052245.791605-1-houtao1@huawei.com

Message

Hou Tao Dec. 19, 2021, 5:22 a.m. UTC
Hi,

In order to use a string as a hash-table key, key_size must be the storage
size of the longest string. If string lengths vary widely, the hash
distribution will be sub-optimal due to the unused zero bytes in shorter
strings, and lookups will be inefficient due to unnecessary memcpy().

Also, the unused part of a string key returned from a bpf helper
(e.g. bpf_d_path) may not be zeroed, and if it is used directly as a lookup
key, the lookup will fail with -ENOENT (as reported in [1]).

This patchset tries to address the inefficiency by adding support for
string keys. During key comparison, the string length is checked first to
avoid unnecessary memcmp(). The hash function is also switched from
jhash() to full_name_hash() to reduce hash collisions for string keys.

The benchmark shows about 16% and 106% improvement under x86-64 and arm64
respectively when key_size is 256, and about 45% and 161% when key_size is
greater than 1024.

The performance improvement was also tested by using the paths of all files
in the Linux kernel source tree as string key input. There are about 74k
files and the maximum string length is 101. When key_size is 104, lookup
performance improves by about 9% and 35% under x86-64 and arm64, and when
key_size is 256 the win increases to 78% and 109% respectively.

Besides optimizing lookup for string keys, it seems the allocated space for
the BPF_F_NO_PREALLOC case can also be optimized. More trials and tests
will be conducted if the idea of a string key is accepted.

Comments are always welcome.

Regards,
Tao

[1]: https://lore.kernel.org/bpf/20211120051839.28212-2-yunbo.xufeng@linux.alibaba.com

Hou Tao (3):
  bpf: factor out helpers for htab bucket and element lookup
  bpf: add BPF_F_STR_KEY to support string key in htab
  selftests/bpf: add benchmark for string-key hash-table

 include/uapi/linux/bpf.h                      |   3 +
 kernel/bpf/hashtab.c                          | 448 +++++++++++-------
 tools/include/uapi/linux/bpf.h                |   3 +
 tools/testing/selftests/bpf/Makefile          |   4 +-
 tools/testing/selftests/bpf/bench.c           |  10 +
 .../selftests/bpf/benchs/bench_str_htab.c     | 255 ++++++++++
 .../testing/selftests/bpf/benchs/run_htab.sh  |  14 +
 .../selftests/bpf/progs/str_htab_bench.c      | 123 +++++
 8 files changed, 685 insertions(+), 175 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_str_htab.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_htab.sh
 create mode 100644 tools/testing/selftests/bpf/progs/str_htab_bench.c

Comments

Alexei Starovoitov Dec. 20, 2021, 3 a.m. UTC | #1
On Sun, Dec 19, 2021 at 01:22:42PM +0800, Hou Tao wrote:
> Hi,
> 
> In order to use a string as a hash-table key, key_size must be the storage
> size of the longest string. If string lengths vary widely, the hash
> distribution will be sub-optimal due to the unused zero bytes in shorter
> strings, and lookups will be inefficient due to unnecessary memcpy().
> 
> Also, the unused part of a string key returned from a bpf helper
> (e.g. bpf_d_path) may not be zeroed, and if it is used directly as a
> lookup key, the lookup will fail with -ENOENT (as reported in [1]).
> 
> This patchset tries to address the inefficiency by adding support for
> string keys. During key comparison, the string length is checked first to
> avoid unnecessary memcmp(). The hash function is also switched from
> jhash() to full_name_hash() to reduce hash collisions for string keys.
> 
> The benchmark shows about 16% and 106% improvement under x86-64 and arm64
> respectively when key_size is 256, and about 45% and 161% when key_size
> is greater than 1024.
> 
> The performance improvement was also tested by using the paths of all
> files in the Linux kernel source tree as string key input. There are
> about 74k files and the maximum string length is 101. When key_size is
> 104, lookup performance improves by about 9% and 35% under x86-64 and
> arm64, and when key_size is 256 the win increases to 78% and 109%
> respectively.
> 
> Besides optimizing lookup for string keys, it seems the allocated space
> for the BPF_F_NO_PREALLOC case can also be optimized. More trials and
> tests will be conducted if the idea of a string key is accepted.

It will work when the key is a string. Sooner or later somebody would need
the key to be a string plus a few other integers or pointers.
This approach will not be usable then.
Much worse, this approach will be impossible to extend.
Have you considered more generic string support?
Make the null-terminated string a first-class citizen.
wdyt?
Hou Tao Dec. 23, 2021, 12:02 p.m. UTC | #2
Hi,

On 12/20/2021 11:00 AM, Alexei Starovoitov wrote:
> On Sun, Dec 19, 2021 at 01:22:42PM +0800, Hou Tao wrote:
>> Hi,
>>
>> In order to use a string as a hash-table key, key_size must be the storage
>> size of the longest string. If string lengths vary widely, the hash
>> distribution will be sub-optimal due to the unused zero bytes in shorter
>> strings, and lookups will be inefficient due to unnecessary memcpy().
>>
>> Also, the unused part of a string key returned from a bpf helper
>> (e.g. bpf_d_path) may not be zeroed, and if it is used directly as a
>> lookup key, the lookup will fail with -ENOENT (as reported in [1]).
>>
>> This patchset tries to address the inefficiency by adding support for
>> string keys. During key comparison, the string length is checked first to
>> avoid unnecessary memcmp(). The hash function is also switched from
>> jhash() to full_name_hash() to reduce hash collisions for string keys.
>>
>> The benchmark shows about 16% and 106% improvement under x86-64 and arm64
>> respectively when key_size is 256, and about 45% and 161% when key_size
>> is greater than 1024.
>>
>> The performance improvement was also tested by using the paths of all
>> files in the Linux kernel source tree as string key input. There are
>> about 74k files and the maximum string length is 101. When key_size is
>> 104, lookup performance improves by about 9% and 35% under x86-64 and
>> arm64, and when key_size is 256 the win increases to 78% and 109%
>> respectively.
>>
>> Besides optimizing lookup for string keys, it seems the allocated space
>> for the BPF_F_NO_PREALLOC case can also be optimized. More trials and
>> tests will be conducted if the idea of a string key is accepted.
> It will work when the key is a string. Sooner or later somebody would need
> the key to be a string plus a few other integers or pointers.
> This approach will not be usable then.
> Much worse, this approach will be impossible to extend.
Although we could format the non-string fields of the key into a string and
still use one string as the only key, you are right: the combination of a
string and other types as a hash key is common, and an optimization limited
to string keys would not apply to these common cases.
> Have you considered more generic string support?
> Make the null-terminated string a first-class citizen.
> wdyt?
The generic string support is a good idea. It needs to fulfill the
following two goals:
1) remove the unnecessary memory zeroing during hash-table update or lookup
2) optimize hash generation and key comparison

The first solution that comes to mind is to add a variable-sized struct
bpf_str and use it as the last field of the hash-table key:

struct bpf_str {
    /* string hash */
    u32 hash;
    u32 len;
    char raw[0];
};

struct htab_key {
    __u32 cookies;
    struct bpf_str name;
};

For hash generation, the length for jhash() will be sizeof(htab_key).
During key comparison, we compare htab_key first; if those values are the
same, we then compare htab_key.name.raw. However, if there are multiple
strings in htab_key, the definition of bpf_str has to change as shown
below. The reference to the content of *name* then depends on the length
of *location*. It is a little weird and hard to use. Maybe we can
concatenate the two strings into one, separated by a zero byte, to make
it work.

struct bpf_str {
    /* string hash */
    u32 hash;
    u32 len;
};

struct htab_key {
    __u32 cookies;
    struct bpf_str location;
    struct bpf_str name;
    char raw[0];
};

Another solution is to assign a per-map unique id to each string. The
definition of bpf_str would then be:

struct bpf_str {
    __u64 uid;
};

Before using a string, we need to convert it to a unique id through a bpf
syscall or a bpf helper. The mapping of string-to-[unique-id, ref cnt]
would be saved as a string-keyed hash table in the map. So there would be
two hash-table lookups in this implementation and performance may be bad.

Do you have other suggestions?

Regards,
Tao
Yonghong Song Dec. 23, 2021, 4:36 p.m. UTC | #3
On 12/23/21 4:02 AM, Hou Tao wrote:
> Hi,
> 
> On 12/20/2021 11:00 AM, Alexei Starovoitov wrote:
>> On Sun, Dec 19, 2021 at 01:22:42PM +0800, Hou Tao wrote:
>>> Hi,
>>>
>>> In order to use a string as a hash-table key, key_size must be the storage
>>> size of the longest string. If string lengths vary widely, the hash
>>> distribution will be sub-optimal due to the unused zero bytes in shorter
>>> strings, and lookups will be inefficient due to unnecessary memcpy().
>>>
>>> Also, the unused part of a string key returned from a bpf helper
>>> (e.g. bpf_d_path) may not be zeroed, and if it is used directly as a
>>> lookup key, the lookup will fail with -ENOENT (as reported in [1]).
>>>
>>> This patchset tries to address the inefficiency by adding support for
>>> string keys. During key comparison, the string length is checked first to
>>> avoid unnecessary memcmp(). The hash function is also switched from
>>> jhash() to full_name_hash() to reduce hash collisions for string keys.
>>>
>>> The benchmark shows about 16% and 106% improvement under x86-64 and arm64
>>> respectively when key_size is 256, and about 45% and 161% when key_size
>>> is greater than 1024.
>>>
>>> The performance improvement was also tested by using the paths of all
>>> files in the Linux kernel source tree as string key input. There are
>>> about 74k files and the maximum string length is 101. When key_size is
>>> 104, lookup performance improves by about 9% and 35% under x86-64 and
>>> arm64, and when key_size is 256 the win increases to 78% and 109%
>>> respectively.
>>>
>>> Besides optimizing lookup for string keys, it seems the allocated space
>>> for the BPF_F_NO_PREALLOC case can also be optimized. More trials and
>>> tests will be conducted if the idea of a string key is accepted.
>> It will work when the key is a string. Sooner or later somebody would need
>> the key to be a string plus a few other integers or pointers.
>> This approach will not be usable then.
>> Much worse, this approach will be impossible to extend.
> Although we could format the non-string fields of the key into a string
> and still use one string as the only key, you are right: the combination
> of a string and other types as a hash key is common, and an optimization
> limited to string keys would not apply to these common cases.
>> Have you considered more generic string support?
>> Make the null-terminated string a first-class citizen.
>> wdyt?
> The generic string support is a good idea. It needs to fulfill the
> following two goals:
> 1) remove the unnecessary memory zeroing during hash-table update or lookup
> 2) optimize hash generation and key comparison
> 
> The first solution that comes to mind is to add a variable-sized struct
> bpf_str and use it as the last field of the hash-table key:
> 
> struct bpf_str {
>      /* string hash */
>      u32 hash;
>      u32 len;
>      char raw[0];
> };
> 
> struct htab_key {
>      __u32 cookies;
>      struct bpf_str name;
> };
> 
> For hash generation, the length for jhash() will be sizeof(htab_key).
> During key comparison, we compare htab_key first; if those values are the
> same, we then compare htab_key.name.raw. However, if there are multiple
> strings in htab_key, the definition of bpf_str has to change as shown
> below. The reference to the content of *name* then depends on the length
> of *location*. It is a little weird and hard to use. Maybe we can
> concatenate the two strings into one, separated by a zero byte, to make
> it work.
> 
> struct bpf_str {
>      /* string hash */
>      u32 hash;
>      u32 len;
> };
> 
> struct htab_key {
>      __u32 cookies;
>      struct bpf_str location;
>      struct bpf_str name;
>      char raw[0];
> };

This would probably work. Tracepoints have a similar mechanism, without the
string hash. For example, for the tracepoint sched/sched_process_exec,
the format looks like below:

# cat format
name: sched_process_exec
ID: 254
format:
        field:unsigned short common_type;       offset:0;       size:2; signed:0;
        field:unsigned char common_flags;       offset:2;       size:1; signed:0;
        field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
        field:int common_pid;   offset:4;       size:4; signed:1;

        field:__data_loc char[] filename;       offset:8;       size:4; signed:1;
        field:pid_t pid;        offset:12;      size:4; signed:1;
        field:pid_t old_pid;    offset:16;      size:4; signed:1;

print fmt: "filename=%s pid=%d old_pid=%d", __get_str(filename), REC->pid, REC->old_pid

So basically, the 'filename' field holds an offset from the start of the
tracepoint structure, and the actual filename is stored at that place.
The same mechanism can be used for multiple strings.

The only thing is that the user needs to define and fill this structure,
which might be a bit of work.

> 
> Another solution is to assign a per-map unique id to each string. The
> definition of bpf_str would then be:
> 
> struct bpf_str {
>      __u64 uid;
> };
> 
> Before using a string, we need to convert it to a unique id through a bpf
> syscall or a bpf helper. The mapping of string-to-[unique-id, ref cnt]
> would be saved as a string-keyed hash table in the map. So there would be
> two hash-table lookups in this implementation and performance may be bad.
> 
> Do you have other suggestions?
> 
> Regards,
> Tao