Message ID | 20220414223704.341028-9-alobakin@pm.me (mailing list archive)
---|---
State | Changes Requested
Delegated to: | BPF
Series | bpf: random unpopular userspace fixes (32 bit et al.)
On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <alobakin@pm.me> wrote:
>
> On 32 bit systems, shifting an unsigned long by 32 positions
> yields the following warning:
>
>   samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
>           unsigned int hi = v >> 32;
>                             ^  ~~
>
> The usual way to avoid this is to shift by 16 two times (see
> upper_32_bits() macro in the kernel). Use it across the BPF sample
> code as well.
>
> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
>  samples/bpf/lathist_kern.c      | 2 +-
>  samples/bpf/lwt_len_hist_kern.c | 2 +-
>  samples/bpf/tracex2_kern.c      | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
>
> [...]
>
> --
> 2.35.2
On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <alobakin@pm.me> wrote:
>
> On 32 bit systems, shifting an unsigned long by 32 positions
> yields the following warning:
>
>   samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
>           unsigned int hi = v >> 32;
>                             ^  ~~
>

long is always 64-bit in BPF, but I suspect this is due to
samples/bpf/Makefile still using this clang + llc combo, where clang
is called with native target and llc for -target bpf. Not sure if we
are ready to ditch that complicated combination. Yonghong, do we still
need that or can we just use -target bpf in samples/bpf?

> The usual way to avoid this is to shift by 16 two times (see
> upper_32_bits() macro in the kernel). Use it across the BPF sample
> code as well.
>
> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> ---
>  samples/bpf/lathist_kern.c      | 2 +-
>  samples/bpf/lwt_len_hist_kern.c | 2 +-
>  samples/bpf/tracex2_kern.c      | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)

[...]
On 4/20/22 10:18 AM, Andrii Nakryiko wrote:
> On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <alobakin@pm.me> wrote:
>>
>> On 32 bit systems, shifting an unsigned long by 32 positions
>> yields the following warning:
>>
>>   samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
>>           unsigned int hi = v >> 32;
>>                             ^  ~~
>>
>
> long is always 64-bit in BPF, but I suspect this is due to
> samples/bpf/Makefile still using this clang + llc combo, where clang
> is called with native target and llc for -target bpf. Not sure if we
> are ready to ditch that complicated combination. Yonghong, do we still
> need that or can we just use -target bpf in samples/bpf?

Currently, most BPF programs in samples/bpf do not use vmlinux.h and
CO-RE; they use kernel header files directly. That is why the clang
C -> IR compilation still needs to target the native architecture.

We could use -target bpf for the whole compilation, but that would
require changing the code to use vmlinux.h and CO-RE. A couple of the
sample BPF programs have already done this.

>
>> The usual way to avoid this is to shift by 16 two times (see
>> upper_32_bits() macro in the kernel). Use it across the BPF sample
>> code as well.
>>
>> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
>> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
>> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
>> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
>> ---
>>  samples/bpf/lathist_kern.c      | 2 +-
>>  samples/bpf/lwt_len_hist_kern.c | 2 +-
>>  samples/bpf/tracex2_kern.c      | 2 +-
>>  3 files changed, 3 insertions(+), 3 deletions(-)
>
> [...]
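The two build modes under discussion can be sketched as shell pipelines. This is only an illustration: the flags and the `prog_kern.c`/`prog_kern.o` file names are assumptions for the sketch, not copied from the actual samples/bpf Makefile, and the commands are stored in variables and printed rather than executed, since running them needs a kernel tree and a BPF-capable LLVM.

```shell
# 1) The historical samples/bpf combo: clang parses the C against *native*
#    kernel headers and emits LLVM IR; llc then lowers the IR to a BPF object.
combo='clang -O2 -emit-llvm -c prog_kern.c -o - | llc -march=bpf -filetype=obj -o prog_kern.o'

# 2) The single-step alternative: compile straight to BPF. This drops the
#    native header parse, so the program must get its kernel types from
#    vmlinux.h + CO-RE instead.
single='clang -O2 -target bpf -g -c prog_kern.c -o prog_kern.o'

# Print both pipelines for comparison.
printf '%s\n' "$combo" "$single"
```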
On Wed, Apr 27, 2022 at 8:55 AM Yonghong Song <yhs@fb.com> wrote:
>
> On 4/20/22 10:18 AM, Andrii Nakryiko wrote:
> > On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <alobakin@pm.me> wrote:
> >>
> >> On 32 bit systems, shifting an unsigned long by 32 positions
> >> yields the following warning:
> >>
> >>   samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
> >>           unsigned int hi = v >> 32;
> >>                             ^  ~~
> >>
> >
> > long is always 64-bit in BPF, but I suspect this is due to
> > samples/bpf/Makefile still using this clang + llc combo, where clang
> > is called with native target and llc for -target bpf. Not sure if we
> > are ready to ditch that complicated combination. Yonghong, do we still
> > need that or can we just use -target bpf in samples/bpf?
>
> Current most bpf programs in samples/bpf do not use vmlinux.h and CO-RE.
> They direct use kernel header files. That is why clang C -> IR
> compilation still needs to be native.
>
> We could just use -target bpf for the whole compilation but that needs
> to change the code to use vmlinux.h and CO-RE. There are already a
> couple of sample bpf programs did this.

Right, I guess I'm proposing to switch samples/bpf to vmlinux.h. Only
purely networking BPF apps can get away with not using vmlinux.h
because they might avoid dependency on kernel types. But even then a
lot of modern networking apps seem to be gaining elements of more
generic tracing and would rely on CO-RE for staying "portable" between
kernels. So it might be totally fine to just use CO-RE universally in
samples/bpf?

>
> >> The usual way to avoid this is to shift by 16 two times (see
> >> upper_32_bits() macro in the kernel). Use it across the BPF sample
> >> code as well.
> >>
> >> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
> >> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
> >> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
> >> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> >> ---
> >>  samples/bpf/lathist_kern.c      | 2 +-
> >>  samples/bpf/lwt_len_hist_kern.c | 2 +-
> >>  samples/bpf/tracex2_kern.c      | 2 +-
> >>  3 files changed, 3 insertions(+), 3 deletions(-)
> >
> > [...]
diff --git a/samples/bpf/lathist_kern.c b/samples/bpf/lathist_kern.c
index 4adfcbbe6ef4..9744ed547abe 100644
--- a/samples/bpf/lathist_kern.c
+++ b/samples/bpf/lathist_kern.c
@@ -53,7 +53,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 
 	if (hi)
 		return log2(hi) + 32;
diff --git a/samples/bpf/lwt_len_hist_kern.c b/samples/bpf/lwt_len_hist_kern.c
index 1fa14c54963a..bf32fa04c91f 100644
--- a/samples/bpf/lwt_len_hist_kern.c
+++ b/samples/bpf/lwt_len_hist_kern.c
@@ -49,7 +49,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 	if (hi)
 		return log2(hi) + 32;
 	else
diff --git a/samples/bpf/tracex2_kern.c b/samples/bpf/tracex2_kern.c
index 5bc696bac27d..6bf22056ff95 100644
--- a/samples/bpf/tracex2_kern.c
+++ b/samples/bpf/tracex2_kern.c
@@ -57,7 +57,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 	if (hi)
 		return log2(hi) + 32;
 	else
On 32 bit systems, shifting an unsigned long by 32 positions
yields the following warning:

  samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
          unsigned int hi = v >> 32;
                            ^  ~~

The usual way to avoid this is to shift by 16 two times (see
upper_32_bits() macro in the kernel). Use it across the BPF sample
code as well.

Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 samples/bpf/lathist_kern.c      | 2 +-
 samples/bpf/lwt_len_hist_kern.c | 2 +-
 samples/bpf/tracex2_kern.c      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

--
2.35.2