Message ID | 9e4171972a3d75e656073e0c25cd4071a6f652e4.1652772731.git.esyr@redhat.com (mailing list archive) |
---|---|
State | New |
Series | Fix 32-bit arch and compat support for the kprobe_multi attach type |
On Tue, May 17, 2022 at 09:36:15AM +0200, Eugene Syromiatnikov wrote:
> Check that size would not overflow before calculation (and return
> -EOVERFLOW if it will), to prevent potential out-of-bounds write
> with the following copy_from_user. Use kvmalloc_array
> in copy_user_syms to prevent out-of-bounds write into syms
> (and especially buf) as well.
>
> Fixes: 0dcac272540613d4 ("bpf: Add multi kprobe link")
> Cc: <stable@vger.kernel.org> # 5.18
> Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>

Acked-by: Jiri Olsa <jolsa@kernel.org>

thanks,
jirka

> ---
>  kernel/trace/bpf_trace.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 7141ca8..9c041be 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2261,11 +2261,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
>  	int err = -ENOMEM;
>  	unsigned int i;
>
> -	syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
> +	syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
>  	if (!syms)
>  		goto error;
>
> -	buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
> +	buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
>  	if (!buf)
>  		goto error;
>
> @@ -2461,7 +2461,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
>  	if (!cnt)
>  		return -EINVAL;
>
> -	size = cnt * sizeof(*addrs);
> +	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
> +		return -EOVERFLOW;
>  	addrs = kvmalloc(size, GFP_KERNEL);
>  	if (!addrs)
>  		return -ENOMEM;
> --
> 2.1.4
>
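To see why the unchecked arithmetic is dangerous here: cnt comes from
userspace, and with a 32-bit size the product cnt * sizeof(*addrs) can
silently wrap, so a buffer far smaller than the cnt entries written
afterwards gets allocated. A self-contained userspace sketch of just that
arithmetic (the count below is a made-up value, not taken from the thread):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t cnt = 0x20000002u;	/* hypothetical user-supplied count */
	/* 0x20000002 * 8 = 0x100000010, truncated to 32 bits -> 16 */
	uint32_t size = cnt * (uint32_t)sizeof(uint64_t);

	printf("cnt=%u -> size=%u bytes\n", cnt, size);
	return 0;
}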
On Tue, May 17, 2022 at 12:36 AM Eugene Syromiatnikov <esyr@redhat.com> wrote:
>
> Check that size would not overflow before calculation (and return
> -EOVERFLOW if it will), to prevent potential out-of-bounds write
> with the following copy_from_user. Use kvmalloc_array
> in copy_user_syms to prevent out-of-bounds write into syms
> (and especially buf) as well.
>
> Fixes: 0dcac272540613d4 ("bpf: Add multi kprobe link")
> Cc: <stable@vger.kernel.org> # 5.18
> Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>
> ---
>  kernel/trace/bpf_trace.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 7141ca8..9c041be 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2261,11 +2261,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
>  	int err = -ENOMEM;
>  	unsigned int i;
>
> -	syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
> +	syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
>  	if (!syms)
>  		goto error;
>
> -	buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
> +	buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
>  	if (!buf)
>  		goto error;
>
> @@ -2461,7 +2461,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
>  	if (!cnt)
>  		return -EINVAL;
>
> -	size = cnt * sizeof(*addrs);
> +	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
> +		return -EOVERFLOW;
>  	addrs = kvmalloc(size, GFP_KERNEL);

any good reason not to use kvmalloc_array() here as well and delegate
overflow to it. And then use long size (as expected by copy_from_user
anyway) everywhere?

>  	if (!addrs)
>  		return -ENOMEM;
> --
> 2.1.4
>
On Wed, May 18, 2022 at 04:30:14PM -0700, Andrii Nakryiko wrote:
> On Tue, May 17, 2022 at 12:36 AM Eugene Syromiatnikov <esyr@redhat.com> wrote:
> >
> > Check that size would not overflow before calculation (and return
> > -EOVERFLOW if it will), to prevent potential out-of-bounds write
> > with the following copy_from_user. Use kvmalloc_array
> > in copy_user_syms to prevent out-of-bounds write into syms
> > (and especially buf) as well.
> >
> > Fixes: 0dcac272540613d4 ("bpf: Add multi kprobe link")
> > Cc: <stable@vger.kernel.org> # 5.18
> > Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>
> > ---
> >  kernel/trace/bpf_trace.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 7141ca8..9c041be 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -2261,11 +2261,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
> >  	int err = -ENOMEM;
> >  	unsigned int i;
> >
> > -	syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
> > +	syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
> >  	if (!syms)
> >  		goto error;
> >
> > -	buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
> > +	buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
> >  	if (!buf)
> >  		goto error;
> >
> > @@ -2461,7 +2461,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
> >  	if (!cnt)
> >  		return -EINVAL;
> >
> > -	size = cnt * sizeof(*addrs);
> > +	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
> > +		return -EOVERFLOW;
> >  	addrs = kvmalloc(size, GFP_KERNEL);
>
> any good reason not to use kvmalloc_array() here as well and delegate
> overflow to it. And then use long size (as expected by copy_from_user
> anyway) everywhere?

Just to avoid double calculation of size; otherwise I don't have
any significant preference, other than that -EOVERFLOW would not be
reported separately (not sure if this is a good or a bad thing), and that
it would be a bit more cumbersome to incorporate Yonghong's
suggestion[1] about the INT_MAX check.

[1] https://lore.kernel.org/lkml/412bf136-6a5b-f442-1e84-778697e2b694@fb.com/

> >  	if (!addrs)
> >  		return -ENOMEM;
> > --
> > 2.1.4
> >
>
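For context, the "INT_MAX check" referred to here is described in the mail
linked as [1]; purely as an illustration (a hypothetical sketch, not
Yonghong's actual wording and not the code that was merged), such a cap
would sit next to the posted patch's overflow check roughly like this:

	if (cnt > INT_MAX / sizeof(*addrs))	/* hypothetical cap; exact limit per [1] */
		return -E2BIG;
	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
		return -EOVERFLOW;
	addrs = kvmalloc(size, GFP_KERNEL);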
On Thu, May 19, 2022 at 7:37 AM Eugene Syromiatnikov <esyr@redhat.com> wrote:
>
> On Wed, May 18, 2022 at 04:30:14PM -0700, Andrii Nakryiko wrote:
> > On Tue, May 17, 2022 at 12:36 AM Eugene Syromiatnikov <esyr@redhat.com> wrote:
> > >
> > > Check that size would not overflow before calculation (and return
> > > -EOVERFLOW if it will), to prevent potential out-of-bounds write
> > > with the following copy_from_user. Use kvmalloc_array
> > > in copy_user_syms to prevent out-of-bounds write into syms
> > > (and especially buf) as well.
> > >
> > > Fixes: 0dcac272540613d4 ("bpf: Add multi kprobe link")
> > > Cc: <stable@vger.kernel.org> # 5.18
> > > Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>
> > > ---
> > >  kernel/trace/bpf_trace.c | 7 ++++---
> > >  1 file changed, 4 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > index 7141ca8..9c041be 100644
> > > --- a/kernel/trace/bpf_trace.c
> > > +++ b/kernel/trace/bpf_trace.c
> > > @@ -2261,11 +2261,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
> > >  	int err = -ENOMEM;
> > >  	unsigned int i;
> > >
> > > -	syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
> > > +	syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
> > >  	if (!syms)
> > >  		goto error;
> > >
> > > -	buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
> > > +	buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
> > >  	if (!buf)
> > >  		goto error;
> > >
> > > @@ -2461,7 +2461,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
> > >  	if (!cnt)
> > >  		return -EINVAL;
> > >
> > > -	size = cnt * sizeof(*addrs);
> > > +	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
> > > +		return -EOVERFLOW;
> > >  	addrs = kvmalloc(size, GFP_KERNEL);
> >
> > any good reason not to use kvmalloc_array() here as well and delegate
> > overflow to it. And then use long size (as expected by copy_from_user
> > anyway) everywhere?
>
> Just to avoid double calculation of size; otherwise I don't have
> any significant preference, other than that -EOVERFLOW would not be
> reported separately (not sure if this is a good or a bad thing), and that
> it would be a bit more cumbersome to incorporate Yonghong's
> suggestion[1] about the INT_MAX check.
>

I think it's totally fine to return ENOMEM if someone requested some
unreasonable amount of symbols. And INT_MAX won't be necessary if we
delegate all the overflow checking to kvmalloc_array()

> [1] https://lore.kernel.org/lkml/412bf136-6a5b-f442-1e84-778697e2b694@fb.com/
>
> > >  	if (!addrs)
> > >  		return -ENOMEM;
> > > --
> > > 2.1.4
> > >
> >
>
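Spelled out, the variant Andrii is suggesting would look roughly like the
sketch below. This is a guess against the quoted function, not the patch
that eventually landed; in particular the user-pointer name uaddrs, err,
and the error label are assumptions about the surrounding code.

	/* kvmalloc_array() fails (returns NULL) if cnt * sizeof(*addrs)
	 * overflows, so the overflow case is folded into -ENOMEM. */
	addrs = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
	if (!addrs)
		return -ENOMEM;

	/* byte size recomputed in a wide type for copy_from_user() */
	if (copy_from_user(addrs, uaddrs, (long)cnt * sizeof(*addrs))) {
		err = -EFAULT;
		goto error;
	}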
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 7141ca8..9c041be 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2261,11 +2261,11 @@ static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32
 	int err = -ENOMEM;
 	unsigned int i;
 
-	syms = kvmalloc(cnt * sizeof(*syms), GFP_KERNEL);
+	syms = kvmalloc_array(cnt, sizeof(*syms), GFP_KERNEL);
 	if (!syms)
 		goto error;
 
-	buf = kvmalloc(cnt * KSYM_NAME_LEN, GFP_KERNEL);
+	buf = kvmalloc_array(cnt, KSYM_NAME_LEN, GFP_KERNEL);
 	if (!buf)
 		goto error;
 
@@ -2461,7 +2461,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 	if (!cnt)
 		return -EINVAL;
 
-	size = cnt * sizeof(*addrs);
+	if (check_mul_overflow(cnt, (u32)sizeof(*addrs), &size))
+		return -EOVERFLOW;
 	addrs = kvmalloc(size, GFP_KERNEL);
 	if (!addrs)
 		return -ENOMEM;
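On the helper itself: check_mul_overflow() (include/linux/overflow.h)
stores the product in its third argument and returns true when it does not
fit that argument's type, which is what lets the hunk above return
-EOVERFLOW before a bogus size ever reaches kvmalloc() and copy_from_user().
A minimal userspace sketch of the same pattern, using the compiler builtin
the kernel helper is built on where available (the count is a made-up
value):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t cnt = 0x20000002u;	/* hypothetical oversized count */
	uint32_t size;

	/* true means cnt * 8 does not fit in the 32-bit 'size' */
	if (__builtin_mul_overflow(cnt, (uint32_t)sizeof(uint64_t), &size)) {
		puts("overflow detected -> would return -EOVERFLOW");
		return 1;
	}
	printf("size=%u\n", size);
	return 0;
}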
Check that size would not overflow before calculation (and return
-EOVERFLOW if it will), to prevent potential out-of-bounds write
with the following copy_from_user. Use kvmalloc_array
in copy_user_syms to prevent out-of-bounds write into syms
(and especially buf) as well.

Fixes: 0dcac272540613d4 ("bpf: Add multi kprobe link")
Cc: <stable@vger.kernel.org> # 5.18
Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>
---
 kernel/trace/bpf_trace.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)